
Markov Nets are systems in which, given an interaction initiated by one agent towards another, one can decide in advance the time at which the second agent will be affected, assuming it does not undergo another event in the meantime. This implicitly covers billiard-ball dynamics (or in fact any short-range interaction, or even interceptable rockets, etc.).

Appendix

The main focus of the present paper was the practical implementation of Markov Nets / Webs on the NatLab platform, so we kept the main text free of formal definitions. Still, the concept of a Markov Net / Web needs to be defined and, in the future, studied. In particular, this will allow the formal study of a host of open problems: the existence and construction of the process, its relation to Markov chains and stochastic differential processes, its limitations, its dynamical stability (multiple equilibria, feedback or evolutionary strategies for stabilizing / regulating it), its applicability to other real-life situations, etc.

Definition of a Markov Net

Given a set of agents indexed by an integer i = 1, ..., N, at any time t each agent can be in a state S(i,t). The state space can be parametrized by discrete or continuous parameters. An agent i can undergo, at any time T (under certain conditions described below), an event E(i,T). The set of events that can occur to a given agent i belongs to a space that again can be parametrized by discrete and/or continuous parameters. A state S(i,t) may have a probability rate M(E(i,t) | i, t, S(i,t)) to generate / undergo an event E(i,t). In turn, an event E(i,T) may have a probability C(S'(i,T) | i, T, E(i,T), S(i,T)) to change the current state S(i,T) of agent i into a new state S'(i,T). An event E(i,T) can also initiate an interaction I(i,j,T,E(i,T)) with another agent j, with probability D(I(i,j,T,E(i,T)) | i, j, T, E(i,T), S(i,T)). The interaction I(i,j,T,E(i,T)) may cause j to undergo, at a time T' > T, an event F(j,T') with probability G(T', F(j,T') | i, j, T, I(i,j,T,E(i,T)), S(j,T')).
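As an illustration only, the kernels M, C and D above can be sketched for a single agent with a two-valued state. The concrete rates, the "idle"/"busy" states and the time step dt are hypothetical choices made for this sketch; in the definition M, C and D are general probability kernels.

```python
import random

def step(state, t, rng, dt=0.01):
    """One small time step for a single agent (hypothetical kernels).

    M: event rate given the current state; C: new state given the event;
    D: probability that the event also initiates an interaction with j.
    """
    M = {"idle": 0.2, "busy": 1.0}[state]   # event rate M(E | state)
    if rng.random() < M * dt:               # an event E(i,t) fires in [t, t+dt)
        # C: here the event deterministically flips the state
        state = "busy" if state == "idle" else "idle"
        # D: with probability 0.3 the event would also initiate an
        # interaction I(i,j,t,E) whose effect time is drawn from G
        if rng.random() < 0.3:
            pass  # scheduling of the effect on agent j omitted in this sketch
    return state, t + dt

rng = random.Random(1)
state, t = "idle", 0.0
for _ in range(1000):
    state, t = step(state, t, rng)
```

A full implementation would run such steps for all N agents and maintain a queue of pending interaction effects.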

Note the crucial point that the probability distribution G that decides T' depends on the state of j at time T'. This may look like a problem, insofar as the definition of T' is self-referential: in particular, events undergone by agent j at times T^ < T' would affect T'. However, this definition can be implemented without solving transcendental equations. All one has to do is the following:
- At time T, after the random interaction I(i,j,T,E(i,T)) has been decided, one estimates (extracts the random value of) T' according to the probability distribution G(T', F(j,T') | i, j, T, I(i,j,T,E(i,T)), S(j,T)), i.e. under the assumption that the state of j at time T' will still be S(j,T') = S(j,T).
- However, each time T^ < T' that the state S(j,T^) of j changes, T' is extracted again from the new probability distribution G(T', F(j,T') | i, j, T, I(i,j,T,E(i,T)), S(j,T^)), i.e. with S(j,T') = S(j,T^). Of course, the new allowed values of T' are then only T' > T^.
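The re-drawing scheme above can be sketched as follows. For concreteness this sketch assumes a hypothetical G in which the delay until the effect F(j,T') is exponential with a rate depending on j's current state; the states and rates are invented for illustration.

```python
import random

def draw_effect_time(t_now, state_j, rng):
    """Sample T' > t_now from a hypothetical G whose rate depends on j's state.

    Here G is exponential: rate 1.0 while j is 'idle', rate 3.0 while 'busy'.
    """
    rate = {"idle": 1.0, "busy": 3.0}[state_j]
    return t_now + rng.expovariate(rate)

rng = random.Random(0)

# At interaction time T, draw T' assuming j keeps its current state S(j,T).
T = 0.0
t_prime = draw_effect_time(T, "idle", rng)

# Suppose j undergoes another event at T^ < T', changing its state: the
# pending T' is discarded and re-drawn from G under the new state, with the
# draw starting from T^ so that the new T' automatically satisfies T' > T^.
T_hat = 0.5 * t_prime
if T_hat < t_prime:
    t_prime = draw_effect_time(T_hat, "busy", rng)

assert t_prime > T_hat  # the re-drawn effect time always lies after T^
```

Because each re-draw starts from the most recent state-change time, no transcendental equation in T' ever needs to be solved.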