Traditional Approaches to Modeling and Analysis

Outline
Concepts: models, the dynamical systems model, fixed points, optimality, convergence, stability.
Models: contraction mappings, Markov chains, the standard interference function.

Basic Model
Dynamical system: a system whose change in state is a function of the current state and time.
Autonomous system: not a function of time; OK for synchronous timing.
Characteristic (evolution) function: the first step in the analysis of a dynamical system; describes the state as a function of time and the initial state, for simplicity, while noting the relevant timing model.

Connection to Cognitive Radio Model
The network state evolves under the radios' decision rules d. The assumption of a known decision rule obviates the need to solve for the evolution function. Reflects the innermost loop of the OODA loop. Useful for deterministic procedural radios (generally discrete time for our purposes).

Example: power control ([Yates_95])
Defines a discrete-time evolution function in terms of each radio's observed SINR γj, each radio's target SINR, and the current transmit power.
Applications:
- Fixed assignment: each mobile is assigned to a particular base station.
- Minimum power assignment: each mobile is assigned to the base station in the network where its SINR is maximized.
- Macro diversity: all base stations in the network combine the signals of the mobiles.
- Limited diversity: a subset of the base stations combine the signals of the mobiles.
- Multiple connection reception: the target SINR must be maintained at a number of base stations.

Applicable analysis models & techniques
- Markov models: absorbing & ergodic chains
- Standard interference function: can be applied beyond power control
- Contraction mappings
- Lyapunov stability

Differences between the assumptions of a dynamical system and the CRN model
- Goals are of secondary importance: technically not needed, and not appropriate for ontological radios.
- There may be no closed-form expression for the decision rule, and thus no evolution function: we really only know that the radio will "intelligently" work towards its goal.
- Unwieldy for random procedural radios: possible to model as a Markov chain, but discovering the transition probabilities requires empirical work or very detailed analysis.

Steady-states
Recall the model <N,A,{di},T>, which we characterize with the evolution function d. A steady-state is a point where a* = d(a*) for all t ≥ t*. Obvious solution: solve for the fixed points of d. For non-cooperative radios, if a* is a fixed point under synchronous timing, then it is also a fixed point under the other three timings.
Works well for convex action spaces, where a fixed point is not always guaranteed to exist; hence the value of fixed point theorems. Not so well for finite spaces, which generally require an exhaustive search.

Fixed Point
Definition: given a mapping f: X → X, a point x* ∈ X is said to be a fixed point of f if f(x*) = x*. In 2-D, fixed points of f can be found by evaluating where the graphs of y = f(x) and y = x intersect.
How much information do we need to have to know that a function has a fixed point / Nash equilibrium?

Visualizing Fixed Point Existence
Consider a continuous f with X compact and convex: a fixed point must exist (the graph of f must cross the line y = x).

Convex Sets
Definition (convex set): let S ⊆ ℝn. S is said to be convex if for all x, y ∈ S, the point w = λx + (1-λ)y is in S for all λ ∈ [0,1]. Equivalently, a set S is convex if for every pair of points x, y drawn from S, the line segment joining x and y is also in S.

Compact Sets
Definition (compact set): a bounded set S is compact if there is no point x ∉ S such that the limit of a sequence formed entirely from elements of S is x. Equivalently: closed and bounded.
Compact sets: any closed finite interval, e.g. [0,1]; the closed n-ball (a filled sphere); the closed disk (note: mathematically a disk is just a ball).
Non-compact sets: (0,1]; [0,∞).

Continuous Function
Definition (continuous function): a function f: X → Y is continuous if for all x0 ∈ X the following three conditions hold: f(x0) is defined, the limit of f(x) as x → x0 exists, and that limit equals f(x0). Note that being differentiable at x0 implies continuity at x0, but continuity does not imply differentiability (e.g. a continuous but not differentiable function such as |x| at 0).

Visualizing Fixed Point Existence
Consider a continuous f where X is convex but not compact, or compact but not convex: a fixed point need not exist.

Brouwer's Fixed Point Theorem
Let f: X → X be a continuous function from a non-empty compact convex set X ⊆ ℝn; then there is some x* ∈ X such that f(x*) = x*. (Note: originally stated as f: B → B where B = {x ∈ ℝn : ||x|| ≤ 1}, the unit n-ball.)
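As a sketch of why the theorem works in one dimension: for continuous f: [0,1] → [0,1], the function g(x) = f(x) - x changes sign between 0 and 1, so the intermediate value theorem supplies a fixed point, which bisection can locate. The map f = cos below is a made-up example, not from the slides.

```python
import math

# 1-D Brouwer: a continuous self-map of [0,1] must have a fixed point,
# since g(x) = f(x) - x satisfies g(0) >= 0 and g(1) <= 0.
# Locate the fixed point by bisecting the sign change of g.
def fixed_point(f, lo=0.0, hi=1.0, tol=1e-12):
    g = lambda x: f(x) - x
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0:
            hi = mid          # sign change lies in [lo, mid]
        else:
            lo = mid          # sign change lies in [mid, hi]
    return 0.5 * (lo + hi)

x_star = fixed_point(math.cos)          # hypothetical f(x) = cos(x)
assert abs(math.cos(x_star) - x_star) < 1e-9
```

Compactness matters: on the non-compact set (0,1], the continuous map f(x) = x/2 has no fixed point, exactly the failure mode the next slide visualizes.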

Visualizing Fixed Point Existence
Consider f: X → X as an upper semi-continuous correspondence, with X compact and convex.

Kakutani's Fixed Point Theorem
Let f: X → X be an upper semi-continuous convex-valued correspondence from a non-empty compact convex set X ⊆ ℝn; then there is some x* ∈ X such that x* ∈ f(x*).

Example steady-state solution Consider Standard Interference Function

Optimality
In general we assume the existence of some design objective function J: A → ℝ. The desirability of a network state a is the value of J(a). In general, maximizers of J are unrelated to fixed points of d.
Figure from Fig 2.6 in I. Akbar, "Statistical Analysis of Wireless Systems Using Markov Models," PhD Dissertation, Virginia Tech, January 2007.

Identification of Optimality
If J is differentiable, then an optimal point must either lie on a boundary or be at a point where the gradient of J is the zero vector.
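A numerical sketch of that dichotomy, using invented one-dimensional objectives and a central-difference gradient estimate:

```python
# Central-difference approximation to the derivative of J at a.
def grad(J, a, h=1e-6):
    return (J(a + h) - J(a - h)) / (2 * h)

# Interior maximizer: the gradient vanishes there.
J1 = lambda a: -(a - 0.3) ** 2          # hypothetical objective on [0, 1]
assert abs(grad(J1, 0.3)) < 1e-6

# Monotone objective on [0, 1]: the maximizer sits on the boundary a = 1,
# where the gradient need not vanish.
J2 = lambda a: 2.0 * a
assert abs(grad(J2, 1.0) - 2.0) < 1e-6
```

So a search for optima must check both the stationary points of J and the boundary of the action space.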

Convergent Sequence
A sequence {pn} in a Euclidean space X converges to a point p ∈ X if for every ε > 0 there is an integer N such that n ≥ N implies dX(pn, p) < ε. This can be equivalently written as lim n→∞ pn = p, or pn → p.

Example Convergent Sequence
Consider pn = 1/n: given ε, choose N = 1/ε and p = 0. Establish convergence by applying the definition. Note this necessitates knowledge of p.
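The ε-N definition can be checked mechanically. Assuming the slide's sequence is pn = 1/n with limit p = 0 (inferred from the choices N = 1/ε and p = 0):

```python
# For p_n = 1/n and p = 0, the choice N = 1/eps works:
# any n >= N + 1 > 1/eps gives |1/n - 0| < eps.
def N_for(eps):
    return int(1.0 / eps) + 1

for eps in (0.5, 0.1, 0.001):
    N = N_for(eps)
    # spot-check a stretch of terms past N against the definition
    assert all(abs(1.0 / n - 0.0) < eps for n in range(N, N + 100))
```

Note how the check needs the limit p = 0 explicitly, which is exactly the weakness the Cauchy criterion removes.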

Convergent Sequence Properties

Cauchy Sequence
A sequence {pn} in a metric space X is a Cauchy sequence if for every ε > 0 there is an integer N such that dX(pn, pm) < ε if n, m ≥ N.

Example Cauchy Sequence
For pn = 1/n: given ε, choose N = 2/ε. Establish convergence by applying the definition; there is no need to know p. In ℝk, every Cauchy sequence converges, and every convergent sequence is Cauchy.
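The same sequence satisfies the Cauchy condition with the slide's choice N = 2/ε, with no reference to the limit:

```python
# For p_n = 1/n: if n, m >= N = 2/eps, then
# |1/n - 1/m| <= 1/n + 1/m <= 2/N <= eps.
def cauchy_N(eps):
    return int(2.0 / eps) + 1

eps = 0.01
N = cauchy_N(eps)
terms = [1.0 / n for n in range(N, N + 200)]
# every pair of terms past N is within eps of each other
assert all(abs(x - y) < eps for x in terms for y in terms)
```

This is why the Cauchy criterion is the practical convergence test: it only inspects the tail of the sequence itself.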

Monotonic Sequences
A sequence {sn} is monotonically increasing if sn < sn+1, and monotonically decreasing if sn > sn+1. (Note: some authors include the equals condition and call such sequences monotonically nondecreasing or monotonically nonincreasing, respectively.) A sequence which is either monotonically increasing or monotonically decreasing is said to be monotonic.

Convergent Monotonic Sequences
Suppose {sn} is monotonic in X. Then {sn} converges if X is bounded. Note that {sn} also converges if X is compact.

Showing convergence with nonlinear programming
Left unanswered: where does the descent function come from?

Stability
Two figures illustrate the distinction: a point can be stable but not attractive, or attractive but not stable.

Lyapunov’s Direct Method Left unanswered: where does L come from?
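A minimal sketch of the direct method: if a candidate function L strictly decreases along every trajectory away from the fixed point, the fixed point is asymptotically stable. The decision rule and the candidate L below are invented for illustration.

```python
# Hypothetical discrete-time decision rule d(a) = 0.5*a + 1,
# with fixed point a* = 2 (since d(2) = 2).
d = lambda a: 0.5 * a + 1.0
a_star = 2.0

# Candidate Lyapunov function: distance to the fixed point.
L = lambda a: abs(a - a_star)

a = 10.0
for _ in range(25):
    a_next = d(a)
    assert L(a_next) < L(a)      # L strictly decreases off the fixed point
    a = a_next
assert L(a) < 1e-6               # the trajectory has closed in on a*
```

The check only verifies the Lyapunov condition along one trajectory; a proof must establish the decrease for all states, which is where finding L becomes the hard part.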

Comments on analysis
We just covered some very general techniques for showing that a system has a fixed point (steady-state), converges, and is stable. We could apply these to every problem independently, but that can be painful and nonobvious (where does the Lyapunov function come from? convergence arguments assume we already know a fixed point). My preferred approach is to analyze general models and then show that particular problems satisfy the conditions of one of the general models.

Analysis models appropriate for dynamical systems
- Contraction mappings: identifiable unique steady-state; everywhere convergent, with a bound on the convergence rate; Lyapunov stable (δ = ε), with the Lyapunov function being the distance to the fixed point. The General Convergence Theorem (Bertsekas) provides convergence for asynchronous timing if the map is a contraction mapping under synchronous timing.
- Standard interference function: forms a pseudo-contraction mapping; can be applied beyond power control.
- Markov chains (ergodic and absorbing): also useful in game analysis.

Contraction Mappings
Every contraction is a pseudo-contraction. Every pseudo-contraction has a fixed point, converges at a rate of α (the contraction modulus), and is globally asymptotically stable (the Lyapunov function is the distance to the fixed point). (Figure: a pseudo-contraction which is not a contraction.)
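A minimal sketch of the geometric convergence rate, using a made-up affine contraction with modulus α = 0.25:

```python
# d(a) = 0.25*a + 3 satisfies |d(x) - d(y)| = 0.25*|x - y|, so it is a
# contraction with modulus alpha = 0.25 and unique fixed point a* = 4.
alpha, a_star = 0.25, 4.0
d = lambda a: alpha * a + 3.0

a = 20.0
err = abs(a - a_star)
for _ in range(10):
    a = d(a)
    # each iteration shrinks the distance to a* by at least alpha
    assert abs(a - a_star) <= alpha * err + 1e-12
    err = abs(a - a_star)

assert err < 1e-4        # 16 * 0.25**10, i.e. geometric decay
```

The same distance |a - a*| serves as the Lyapunov function, which is why contraction mappings deliver existence, convergence rate, and stability all at once.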

General Convergence Theorem A synchronous contraction mapping also converges asynchronously

Standard Interference Function Conditions
Suppose d: A → A and d satisfies:
- Positivity: d(a) > 0
- Monotonicity: if a1 ≥ a2, then d(a1) ≥ d(a2)
- Scalability: for all α > 1, αd(a) > d(αa)
Then d is a pseudo-contraction mapping [Berggren] under synchronous timing, which implies synchronous and asynchronous convergence, and implies stability.
R. Yates, "A Framework for Uplink Power Control in Cellular Radio Systems," IEEE JSAC, Vol. 13, No. 7, Sep. 1995, pp. 1341-1347.
F. Berggren, "Power Control, Transmission Rate Control and Scheduling in Cellular Radio Systems," PhD Dissertation, Royal Institute of Technology, Stockholm, Sweden, May 2001.

Yates' power control applications
Target SINR algorithms:
- Fixed assignment: each mobile is assigned to a particular base station.
- Minimum power assignment: each mobile is assigned to the base station in the network where its SINR is maximized.
- Macro diversity: all base stations in the network combine the signals of the mobiles.
- Limited diversity: a subset of the base stations combine the signals of the mobiles.
- Multiple connection reception: the target SINR must be maintained at a number of base stations.

Example steady-state solution Consider Standard Interference Function

Markov Chains
Describes adaptations as probabilistic transitions between network states; d is nondeterministic. Sources of randomness: nondeterministic timing, noise. Frequently depicted as a weighted digraph or as a transition matrix.

General Insights ([Stewart_94])
Probability of occupying a state after two iterations: form P·P. The entry pmn in the mth row and nth column of P² represents the probability that the system is in state an two iterations after being in state am. More generally, the entry pmn of Pk represents the probability that the system is in state an k iterations after being in state am.
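A sketch with a hypothetical two-state chain (the transition probabilities are made up):

```python
# Entry (m, n) of P^k is the probability of occupying state a_n
# k iterations after being in state a_m.
def matmul(A, B):
    n = len(B)
    return [[sum(A[i][t] * B[t][j] for t in range(n))
             for j in range(len(B[0]))] for i in range(len(A))]

P = [[0.9, 0.1],
     [0.5, 0.5]]

P2 = matmul(P, P)        # two-step transition matrix
# two-step probability 0 -> 0: stay-stay plus leave-return
assert abs(P2[0][0] - (0.9 * 0.9 + 0.1 * 0.5)) < 1e-12

Pk = P
for _ in range(3):
    Pk = matmul(Pk, P)   # P^4: four-step transition probabilities
assert all(abs(sum(row) - 1.0) < 1e-12 for row in Pk)  # rows stay stochastic
```

Matrix powers are thus the basic tool for both the transient behavior and, in the limit, the steady-state of the chain.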

Steady-states of Markov chains
It may be inaccurate to consider a Markov chain to have a fixed point (though this is actually OK for absorbing Markov chains).
Stationary distribution: a probability distribution π* such that π*T P = π*T is said to be a stationary distribution for the Markov chain defined by P.
Limiting distribution: given an initial distribution π0 and transition matrix P, the limiting distribution is the distribution that results from evaluating lim k→∞ π0T Pk.
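For a two-state chain the stationary distribution has a closed form, and iterating π0ᵀPᵏ recovers it as the limiting distribution. The transition probabilities below are invented:

```python
# For P = [[1-a, a], [b, 1-b]], the stationary distribution is
# pi = (b, a) / (a + b).
a, b = 0.1, 0.5
P = [[1 - a, a],
     [b, 1 - b]]
pi = [b / (a + b), a / (a + b)]

# pi is invariant under one step: pi^T P = pi^T
step = [pi[0] * P[0][j] + pi[1] * P[1][j] for j in range(2)]
assert all(abs(step[j] - pi[j]) < 1e-12 for j in range(2))

# the chain forgets its start: iterate pi0^T P^k from pi0 = (1, 0)
dist = [1.0, 0.0]
for _ in range(200):
    dist = [dist[0] * P[0][j] + dist[1] * P[1][j] for j in range(2)]
assert all(abs(dist[j] - pi[j]) < 1e-9 for j in range(2))
```

Here the limiting distribution exists and matches the stationary one; for periodic or reducible chains the limit can fail to exist or can depend on π0.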

Ergodic Markov Chain
[Stewart_94] states that a Markov chain is ergodic if it is a) irreducible, b) positive recurrent, and c) aperiodic. An easier-to-identify rule: for some k, Pk has only nonzero entries. (Convergence, steady-state) If ergodic, then the chain has a unique limiting stationary distribution.
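The "some Pᵏ has only nonzero entries" rule is easy to test directly; both chains below are toy examples:

```python
# Sufficient check for ergodicity: some power P^k is entrywise positive.
def matpow_all_positive(P, kmax=16):
    n = len(P)
    M = [row[:] for row in P]
    for _ in range(kmax):
        if all(x > 0 for row in M for x in row):
            return True
        M = [[sum(M[i][t] * P[t][j] for t in range(n))
              for j in range(n)] for i in range(n)]
    return False

periodic = [[0.0, 1.0],
            [1.0, 0.0]]      # flips state every step; powers keep zeros
aperiodic = [[0.5, 0.5],
             [1.0, 0.0]]     # P^2 is already entrywise positive

assert not matpow_all_positive(periodic)
assert matpow_all_positive(aperiodic)
```

The periodic chain is irreducible but has no limiting distribution (it oscillates), which is exactly why aperiodicity appears in the definition.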

Shortcomings in traditional techniques
- Fixed point theorems provide little insight into convergence or stability.
- Lyapunov functions are hard to identify.
- Contraction mappings are rarely encountered.
- Doesn't address nondeterministic algorithms (e.g. genetic algorithms).
- Analyzes one algorithm at a time, giving little insight into related algorithms.
- Not very useful for finite action spaces.
- No help if all you have is the cognitive radios' goals and actions.

Absorbing Markov Chains
Absorbing state: given a Markov chain with transition matrix P, a state am is said to be an absorbing state if pmm = 1.
Absorbing Markov chain: a Markov chain is said to be absorbing if it has at least one absorbing state and, from every state in the chain, there exists a sequence of state transitions with nonzero probability that leads to an absorbing state. The nonabsorbing states are called transient states.
(Figure: example chain with states a0 through a5.)

Absorbing Markov Chain Insights ([Kemeny_60])
Canonical form: order the states so that P = [[Q, R], [0, I]], with Q the transitions among transient states and R the transitions into absorbing states.
Fundamental matrix: N = (I - Q)^-1. The entry nkm is the expected number of times that the system will pass through state am given that the system starts in state ak.
(Convergence rate) The expected number of iterations before the system ends in an absorbing state, starting in state am, is given by tm, where t = N1 and 1 is a ones vector.
(Final distribution) The probability of ending up in absorbing state am given that the system started in ak is bkm, where B = NR.
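A sketch of these quantities for a made-up chain with two transient states and one absorbing state, computing N = (I - Q)⁻¹ with the 2×2 inverse formula:

```python
# Canonical form P = [[Q, R], [0, I]]; all probabilities are invented.
Q = [[0.5, 0.2],
     [0.3, 0.4]]          # transient -> transient
R = [[0.3],
     [0.3]]               # transient -> absorbing

# N = (I - Q)^(-1) via the 2x2 inverse formula.
a, b = 1 - Q[0][0], -Q[0][1]
c, d = -Q[1][0], 1 - Q[1][1]
det = a * d - b * c
N = [[d / det, -b / det],
     [-c / det, a / det]]

# t = N * 1: expected iterations before absorption, per starting state.
t = [N[i][0] + N[i][1] for i in range(2)]
# B = N * R: probability of ending in each absorbing state.
B = [[N[i][0] * R[0][0] + N[i][1] * R[1][0]] for i in range(2)]

assert abs(t[0] - 10 / 3) < 1e-9 and abs(t[1] - 10 / 3) < 1e-9
# with a single absorbing state, absorption is certain from everywhere
assert abs(B[0][0] - 1.0) < 1e-9 and abs(B[1][0] - 1.0) < 1e-9
```

With several absorbing states, the rows of B instead give the split of absorption probability among them, which is what "final distribution" refers to.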

Two-Channel DFS
Decision rule, goal, and timing: a random timer set to go off with probability p = 0.5 at each iteration.

Analysis Models

Model Steady States

Model Convergence

Model Stability

Shortcomings in "traditional" techniques
- Fixed point theorems provide little insight into convergence or stability.
- Lyapunov functions are hard to identify.
- Contraction mappings are rarely encountered.
- Doesn't address nondeterministic algorithms (e.g. genetic algorithms).
- Not very useful for finite action spaces.
- No help if all you have is the cognitive radios' goals and actions.

Comments
No unified method for analyzing cognitive radio interactions; a random collection of methods for different problems. Perhaps a bit of a stretch to call it "traditional" with respect to cognitive radios. Is not suitable for analyzing radios with