Joint Power and Channel Minimization in Topology Control: A Cognitive Network Approach
Jorge Mori, Alexander Yakobovich, Michael Sahai, Lev Faynshteyn
Problem Definition
An ad-hoc wireless network topology faces two problems:
Power consumption
◦ Mobile devices have a limited power supply
Overcrowded spectrum
◦ Too many devices try to use the same frequency simultaneously, resulting in interference
Previous Work
Interference avoidance has led to three viewpoints:
Radio
◦ Minimize channel interference at the link level
Topology
◦ Channel assignments made in an already existing topology
Network
◦ A combination of channel assignment with routing
Previous Work
Prior work rests on two control assumptions:
◦ Power control
◦ Channel control
Power approaches:
◦ Burkhart et al. assign each connection a weight equal to the number of radios the connection interferes with. Used in the MMLIP, MAICPC and IMST algorithms.
◦ Use of a radio interference function, in which the interference contribution of a radio is the maximum interference over all connections incident upon it. Used in the MMMIP and LILT algorithms.
Previous Work (Cont.)
Channel control: the connectivity of the network is fixed, and two radios can communicate only if they share a common channel, of which there are fewer available than needed.
Researchers' Approach
Their work assumes that radios regulate both power and channel selection.
A two-phased, two-cognitive-element approach to:
◦ Power assignment
◦ Channel assignment
A game-theoretic model is used to analyze the behaviors of these elements.
Methodology
A two-phased game model is used:
The first phase is a pure power control game, in which POWERCONTROL elements attempt to minimize their transmit power levels while maintaining network connectivity.
The output of the first phase is a power-efficient topology, which is fed into the second phase, where CHANNELCONTROL elements selfishly play the channel selection game.
Methodology (Cont.)
The POWERCONTROL elements utilize the δ-Improvement Algorithm (DIA):
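As a rough illustration of the idea, here is a minimal Python sketch of a DIA-style loop, assuming a simple symmetric-link model in which node i reaches node j only if its power meets required_power[i][j]; the helper names and the round-robin scheduling are illustrative assumptions, not the paper's exact formulation.

```python
def links(powers, required_power):
    """Bidirectional link (i, j) exists when both endpoints transmit
    with enough power to reach each other (an assumed link model)."""
    n = len(powers)
    return {(i, j)
            for i in range(n) for j in range(i + 1, n)
            if powers[i] >= required_power[i][j]
            and powers[j] >= required_power[j][i]}

def is_connected(n, edges):
    """Depth-first search connectivity check from node 0."""
    adj = {i: [] for i in range(n)}
    for i, j in edges:
        adj[i].append(j)
        adj[j].append(i)
    seen, stack = {0}, [0]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == n

def dia(powers, required_power, delta):
    """Each node repeatedly tries to shave delta off its transmit power;
    a reduction is kept only if the network stays connected."""
    powers = list(powers)
    n = len(powers)
    improved = True
    while improved:
        improved = False
        for i in range(n):          # round-robin over the radios
            trial = list(powers)
            trial[i] -= delta       # tentative power reduction at node i
            if trial[i] >= 0 and is_connected(n, links(trial, required_power)):
                powers = trial      # keep the move: still connected
                improved = True
    return powers
```

The fixed point of this loop is a topology in which no node can unilaterally lower its power by δ without disconnecting the network, which is the power-efficient topology handed to the second phase.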
Methodology (Cont.)
LOCAL-RS, a localized version of the Random Sequential (RS) coloring algorithm:
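As a rough illustration, here is a minimal sketch of the Random Sequential coloring rule on which LOCAL-RS is based, with channels modeled as integer colors; note the localized version restricts each radio to information from its own neighborhood, which this centralized, sequential sketch does not capture.

```python
import random

def random_sequential_coloring(adj, num_colors):
    """adj: dict mapping each vertex to the set of its neighbors.
    Visit vertices in a random order; each takes the lowest-indexed
    color not already used by a colored neighbor."""
    color = {}
    order = list(adj)
    random.shuffle(order)           # random sequential visiting order
    for v in order:
        taken = {color[u] for u in adj[v] if u in color}
        free = [c for c in range(num_colors) if c not in taken]
        if not free:                # fewer channels available than needed
            return None
        color[v] = free[0]          # lowest conflict-free channel
    return color
```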
Optimized Approach – Power Control
Use a Minimum Spanning Tree (MST) algorithm to solve the power control problem:
G = (V, E, W) denotes the input undirected stochastic graph:
◦ V – vertex set
◦ E – edge set
◦ W – matrix of the probability distributions of the edge weights in the stochastic graph
Each node of the graph is a learning automaton.
The resulting network is described by a triple ⟨A, α, W⟩, where:
◦ A = {A1, A2, ..., Am} – the set of learning automata
◦ α = {α1, α2, ..., αm} – the set of actions, in which αi = {αi1, αi2, ..., αij, ..., αir} defines the set of actions that can be taken by learning automaton Ai, for each αi ∈ α
◦ Weight wij is the cost associated with edge e(i, j)
MST Algorithm
Step 1. The learning automata are sequentially and randomly activated and choose one of their actions according to their action probability vectors. Automata are activated until either the number of selected edges is greater than or equal to (n − 1) or no unactivated automata remain.
Step 2. The weight of the traversed spanning tree is computed and then compared to the dynamic threshold Tk = (1/k) Σi=1..k Wτi, the average weight of all spanning trees traversed so far.
Step 3. If the weight of the traversed spanning tree is less than or equal to the dynamic threshold, i.e. Wτi(k+1) ≤ Tk, the activated automata are rewarded with probability di(k) in accordance with the L_R-P learning algorithm; otherwise the activated automata are penalized.
Step 4. Steps 2 and 3 are repeated until the product of the probabilities of the edges along the traversed spanning tree is greater than a certain threshold, or the number of traversed trees exceeds a pre-specified threshold.
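A hedged Python sketch of the loop above, under several simplifying assumptions: edge weights are sampled from normal distributions standing in for W, the automata are rewarded deterministically rather than with probability di(k), the updates use standard linear reward-penalty (L_R-P) formulas, and the chosen edge set is not checked to be a spanning tree (the paper's activation scheme enforces that structurally).

```python
import random

def l_rp_reward(probs, chosen, a=0.1):
    # Standard linear reward: boost the chosen action, scale others down.
    return [p + a * (1 - p) if j == chosen else (1 - a) * p
            for j, p in enumerate(probs)]

def l_rp_penalty(probs, chosen, b=0.05):
    # Standard linear penalty: cut the chosen action, spread mass to the rest.
    r = len(probs)
    return [(1 - b) * p if j == chosen else b / (r - 1) + (1 - b) * p
            for j, p in enumerate(probs)]

def la_mst(nodes, incident, weight_dist, max_stages=2000, stop_p=0.9):
    # incident[v]    : list of edges the automaton at node v can pick
    # weight_dist[e] : (mean, std) of the stochastic weight of edge e
    probs = {v: [1.0 / len(incident[v])] * len(incident[v]) for v in nodes}
    weight_sum, last_edges = 0.0, set()
    for k in range(1, max_stages + 1):
        # Step 1: activate automata in random order; each picks an edge
        # by its action probability vector, until n - 1 edges are chosen.
        chosen, edges = {}, set()
        for v in random.sample(nodes, len(nodes)):
            if len(edges) >= len(nodes) - 1:
                break
            j = random.choices(range(len(incident[v])), probs[v])[0]
            chosen[v] = j
            edges.add(incident[v][j])
        # Step 2: sample the tree weight; T_k is the running average.
        w = sum(random.gauss(*weight_dist[e]) for e in edges)
        weight_sum += w
        t_k = weight_sum / k
        # Step 3: reward if the traversed tree is no heavier than T_k,
        # otherwise penalize, via the L_R-P updates above.
        update = l_rp_reward if w <= t_k else l_rp_penalty
        for v, j in chosen.items():
            probs[v] = update(probs[v], j)
        last_edges = edges
        # Step 4: stop once the product of the chosen edges'
        # probabilities exceeds the threshold stop_p.
        conf = 1.0
        for v, j in chosen.items():
            conf *= probs[v][j]
        if conf > stop_p:
            break
    return last_edges
```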
Optimized Approach – Channel Control
The resulting network is described by the pair ⟨A, α⟩, where:
◦ A = {A1, A2, …, Am} denotes the set of learning automata
◦ α = {α1, α2, …, αm} denotes the set of actions
◦ αi = {αi1, αi2, …, αir} defines the set of actions that can be taken by learning automaton Ai, for each αi ∈ α
The set of colors with which each vertex vi can be colored forms the set of actions that can be taken by learning automaton Ai.
Channel Control Algorithm
Step 1. Color selection phase
◦ For all learning automata, do in parallel:
Each automaton Ai picks a color that has not yet been selected in its neighborhood.
Vertex vi is colored with the color corresponding to the selected action.
The selected color is added to the list of colors (color-set) with which the graph may be legally colored at this stage.
Step 2. Updating the dynamic threshold and action probabilities
◦ If the cardinality of the color-set (in a legal coloring) created is less than or equal to the dynamic threshold Tk, then:
Threshold Tk is set to the cardinality of the color-set selected at this stage.
All learning automata reward their actions and update their action probability vectors using an L_R-P reinforcement scheme.
◦ Otherwise:
Each learning automaton updates its probability vector by penalizing its chosen action.
Step 3. Stopping condition
◦ The process of selecting legal colorings of the graph and updating the action probabilities is repeated until the product of the probabilities of choosing the colors of a legal coloring, called PLC, is greater than a certain threshold, or the number of colorings exceeds a pre-specified threshold. The coloring chosen last before the algorithm stops is the one with the smallest color-set among all proper colorings.
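A hedged sketch of this coloring game, under the same assumptions as the MST sketch (standard L_R-P updates, simplified scheduling). Conflict resolution here falls back to the lowest legal color, a stand-in for the slide's "colors not yet selected" rule, and it assumes num_colors exceeds the maximum degree so a legal color always exists.

```python
import random

def l_rp(probs, chosen, rewarded, a=0.1, b=0.05):
    # Standard linear reward-penalty (L_R-P) update.
    r = len(probs)
    if rewarded:
        return [p + a * (1 - p) if j == chosen else (1 - a) * p
                for j, p in enumerate(probs)]
    return [(1 - b) * p if j == chosen else b / (r - 1) + (1 - b) * p
            for j, p in enumerate(probs)]

def la_coloring(adj, num_colors, max_stages=2000, stop_p=0.9):
    # adj: dict mapping each vertex to the set of its neighbors.
    probs = {v: [1.0 / num_colors] * num_colors for v in adj}
    t_k, best = num_colors, None   # dynamic threshold: best size so far
    for _ in range(max_stages):
        # Step 1: every automaton proposes a color; on collision with an
        # already-colored neighbor, take the lowest legal color instead.
        color = {}
        for v in adj:
            j = random.choices(range(num_colors), probs[v])[0]
            taken = {color[u] for u in adj[v] if u in color}
            if j in taken:
                j = min(c for c in range(num_colors) if c not in taken)
            color[v] = j
        used = len(set(color.values()))
        # Step 2: reward all automata if this legal coloring is no larger
        # than T_k and tighten the threshold; otherwise penalize.
        rewarded = used <= t_k
        if rewarded:
            t_k, best = used, dict(color)
        for v in adj:
            probs[v] = l_rp(probs[v], color[v], rewarded)
        # Step 3: stop once PLC, the product of the chosen colors'
        # probabilities, exceeds the threshold stop_p.
        plc = 1.0
        for v in adj:
            plc *= probs[v][color[v]]
        if plc > stop_p:
            break
    return best
```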
QUESTIONS? Thank you.