Instructor: Spyros Reveliotis IE7201: Production & Service Systems Engineering Fall 2009 Closure.

Course Objectives
–Provide an understanding and appreciation of the different resource allocation and coordination problems that underlie the operation of production and service systems.
–Enhance the student's ability to formally characterize and study these problems by referring them to pertinent analytical abstractions and modeling frameworks.
–Develop an appreciation of the inherent complexity of these problems and the resulting need for simplifying approximations.
–Systematize the notion and role of simulation in the considered problem contexts.
–Define a "research frontier" in the addressed areas.

Course Outline
1. Introduction: Course Objectives, Context, and Outline
–Contemporary organizations and the role of Operations Management (OM)
–Corporate strategy and its connection to operations
–The organization as a resource allocation system (RAS)
–Classification of production systems on the basis of their workflow structure and control policies
–The underlying RAS management problems and the need for understanding the impact of the underlying stochasticity
–The basic course structure
2. Modeling and Analysis of Production and Service Systems as Continuous-Time Markov Chains
–(A brief overview of the key results of the theory of Markov Chains and Continuous-Time Markov Chains (CT-MCs))
–Birth-Death Processes and the M/M/1 Queue
  · Transient Analysis
  · Steady-State Analysis
–Modeling more complex behavior through CT-MCs
  · Single-station systems with multi-stage processing, finite resources and/or blocking effects
  · Open (Jackson) and Closed (Gordon-Newell) Queueing Networks
  · Gershwin's Models for Transfer Line Analysis
  · Bucket Brigades
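As a minimal illustration of the steady-state analysis of the M/M/1 queue listed above, the sketch below computes the standard closed-form metrics. The arrival and service rates used in the call are arbitrary illustrative values, not figures from the course:

```python
# Steady-state metrics for an M/M/1 queue (a birth-death process).

def mm1_metrics(lam, mu):
    """Return utilization rho, mean number in system L, and mean sojourn time W."""
    if lam >= mu:
        raise ValueError("unstable: need lam < mu")
    rho = lam / mu                 # server utilization
    L = rho / (1.0 - rho)          # mean number in system
    W = L / lam                    # mean time in system, via Little's law
    return rho, L, W

rho, L, W = mm1_metrics(lam=2.0, mu=5.0)
print(rho, L, W)   # rho = 0.4, L = 2/3, W = 1/3
```

The stationary distribution itself is geometric, P(n) = (1 - rho) * rho**n, so any other steady-state quantity can be derived from `rho` alone.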

Course Outline (cont.)
3. Accommodating non-Markovian behavior
–Phase-type distributions and their role as approximating distributions
–The M/G/1 queue
–The G/M/1 queue
–The G/G/1 queue
–The essence of "Factory Physics"
–(BCMP networks)
4. Performance Control of Production and Service Systems
–Controlling the "event rates" of the underlying CT-MC model (an informal introduction to the dual Linear Programming formulation in standard MDP theory)
–A brief introduction to the theory of Markov Decision Processes (MDPs) and Dynamic Programming (DP)
–An introduction to Approximate DP
–An introduction to dispatching rules and classical scheduling theory
–Buffer-based priority scheduling policies, Meyn and Kumar's performance bounds, and stability theory
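For the M/G/1 queue above, the Pollaczek-Khinchine mean-value formula gives the expected waiting time from just the first two moments of the service distribution. The sketch below uses illustrative parameter values and checks the formula against the exponential (M/M/1) special case:

```python
# Pollaczek-Khinchine formula for the mean waiting time in an M/G/1 queue.

def mg1_wait(lam, es, es2):
    """Mean time in queue: Wq = lam * E[S^2] / (2 * (1 - rho)), rho = lam * E[S]."""
    rho = lam * es
    if rho >= 1.0:
        raise ValueError("unstable: need rho < 1")
    return lam * es2 / (2.0 * (1.0 - rho))

# Exponential service with rate mu has E[S] = 1/mu and E[S^2] = 2/mu^2,
# so the formula should recover the M/M/1 result Wq = rho / (mu - lam).
lam, mu = 2.0, 5.0
wq = mg1_wait(lam, 1.0 / mu, 2.0 / mu**2)
print(wq)   # approximately 0.1333 = 0.4 / (5 - 2)
```

Only the mean and variance of the service time enter, which is what makes the formula useful for the "fit more general behaviors" proficiency discussed later in these slides.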

Course Outline (cont.)
5. Behavioral Control of Production and Service Systems
–Behavioral modeling and analysis of production and service systems
–Resource allocation deadlock and the need for liveness-enforcing supervision (LES)
–Petri nets as a modeling and analysis tool
–A brief introduction to the behavioral control of production and service systems

What (I hope that) we eventually achieved
–A solid exposition of
  · DTMC theory
  · Properties of exponential distributions and Poisson processes
  · CTMC and semi-Markov theory
  · Classical queueing theory
  · (An introduction to reversibility theory and its implications)
–Demonstration of the application of the aforementioned theory to the modeling and performance evaluation of production and service systems
  · Bucket Brigades
  · "Factory Physics"-type models
  · The "method of stages" for modeling non-Markovian behavior through Markovian models
  · Other examples presented in class and in reading assignments (e.g., the excerpt from Gershwin)
  · Homework problems
–Establishing some proficiency in working with the aforementioned stochastic models
  · Identify / recognize cases that admit exact modeling
  · "Fit" more general situations / behaviors to the presented queueing models for some approximating analysis
  · Go back to (semi-)Markov models using the method of stages, possibly with some approximating techniques on the resulting Markov model, to deal with the complexity problems arising from the "curse of dimensionality"
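The "method of stages" mentioned above replaces a non-exponential activity with a chain of exponential stages so that the overall model stays Markovian. A minimal sketch, assuming the common Erlang-k fit that matches the mean and (as closely as an integer stage count allows) the squared coefficient of variation (SCV); the parameter values are made up for illustration:

```python
# Erlang-k stage fit: k exponential stages in series, each with rate k/mean.
# An Erlang-k distribution has mean k/rate and SCV = 1/k.

def erlang_stages(mean, scv):
    """Pick k = round(1/scv) stages (at least 1); return (k, per-stage rate)."""
    k = max(1, round(1.0 / scv))
    rate = k / mean            # each stage has mean mean/k
    return k, rate

k, rate = erlang_stages(mean=2.0, scv=0.25)
print(k, rate)   # 4 stages, each exponential with rate 2.0
```

The resulting Erlang-4 distribution has mean 4/2.0 = 2 and SCV 1/4, exactly matching the target here; for SCV values that are not reciprocals of integers the fit is approximate, and for SCV > 1 a hyperexponential mixture is the usual phase-type choice instead.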

Towards performance control… In the case of a small number of alternatives, use an appropriate (descriptive) model to evaluate the performance of each alternative. For cases where the system performance can be explicitly expressed as a function of the parameters of interest, formulate and solve an appropriate optimization (mathematical programming) problem. Markov Decision Processes (MDPs):
–The transition probabilities out of each state are functions of the decisions / actions taken at that state.
–A selection of the decisions to be followed at each state (and visit) constitutes a policy; a policy can be deterministic or randomized.
–A functional defined over the sequence of visited states and the actions performed at each state characterizes the performance of a policy instantiation.
–The expected value of this functional over all the possible sample paths generated by the considered policy, when starting from a certain initial state, defines the policy's value at this state.
–In an MDP setting we want to identify a policy that optimizes the value of every state.
–The state function providing these values under an optimal policy is known as the value function of the MDP problem.
–Dynamic Programming (DP) is a set of methods that try to compute the value function of a given MDP problem by building upon the fundamental remark that the value of a (state, action) pair at a certain time / period can be decomposed into an immediate value for taking the considered action at the given state in the current period, plus the expected value of the resulting state. This decomposition is the essential content of the so-called Bellman equation.
–Availability of the value function reduces the problem of computing an optimal decision at any given state to a local optimization problem over the set of actions available at that state.
–Classical DP methods are limited by the very large size of the underlying state spaces, which renders intractable even the enumeration of the entire value function! These complexity problems are collectively known as the "curse of dimensionality".
–Approximate DP (ADP) seeks to deal with the curse of dimensionality by working with an approximate compact representation of the value function. This approach necessitates (i) the development of a representational space able to support effective compact approximations of the problem's value function, and (ii) techniques for "fitting" the aforementioned representation to the particular value function corresponding to any given problem instantiation.
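The Bellman decomposition described above can be sketched as a classical value-iteration routine. The two-state MDP below (its states, actions, rewards, and discount factor) is an invented toy example, not a model from the course:

```python
# Value iteration: repeatedly apply the Bellman equation until the
# value function stops changing.

# transitions[s][a] = list of (probability, next_state, reward) triples
transitions = {
    0: {"stay": [(1.0, 0, 0.0)], "go": [(0.8, 1, 5.0), (0.2, 0, 0.0)]},
    1: {"stay": [(1.0, 1, 1.0)], "go": [(1.0, 0, 2.0)]},
}
gamma = 0.9   # discount factor

def value_iteration(transitions, gamma, tol=1e-10):
    V = {s: 0.0 for s in transitions}
    while True:
        delta = 0.0
        for s in transitions:
            # Bellman update: immediate reward + discounted value of the next state,
            # maximized over the actions available at s.
            best = max(
                sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                for outcomes in transitions[s].values()
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

V = value_iteration(transitions, gamma)
print(V)
```

Each sweep is a "local optimization over the set of actions available at that state", exactly the operation the slide describes; the dictionary-per-state representation also makes it plain why enumerating `V` becomes intractable when the state space explodes.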

Petri nets and behavioral control of RAS
[Figure: a (generalized stochastic) Petri net modeling the CQN of Problem 8.15 in Homework #3]
–Petri nets offer an unambiguous and compact representation of DES structure and behavior.
–Reachability / coverability analysis is the systematic construction of the underlying state space through an efficient enumeration of all the possible behaviors generated by the system.
–The availability of the reachable state space enables an assessment of various qualitative properties of the system behavior (like boundedness, liveness, deadlock-freedom, fairness, etc.).
–For many PN sub-classes, the assessment of many such qualitative properties can be performed through much more efficient analytical / algebraic techniques that exploit the information on the system structure and its invariants provided by the PN model itself.
–In the context of DES modeling the resource allocation taking place in production and service systems, an important issue is the maintenance of live behavior by preventing the formation of deadlock.
–Generalized Stochastic PNs allow the modeling of the passage of time by distinguishing between immediate and timed transitions. A timed transition fires after a delay drawn from an exponential distribution, and the resulting time-based dynamics correspond to a semi-Markov process defined on the underlying state space.
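The reachability analysis mentioned above amounts to a breadth-first enumeration of markings. A minimal sketch for an ordinary place/transition net; the two-place, two-transition net below is a toy example of my own, not the CQN model from the slide's figure:

```python
# Breadth-first construction of the reachability set of a Petri net.
from collections import deque

# Each transition is a (consume vector, produce vector) over the places.
transitions = {
    "t1": ((1, 0), (0, 1)),   # move a token from place 0 to place 1
    "t2": ((0, 1), (1, 0)),   # move a token back from place 1 to place 0
}

def fire(marking, t):
    """Return the successor marking, or None if t is not enabled at marking."""
    take, give = transitions[t]
    if all(m >= c for m, c in zip(marking, take)):
        return tuple(m - c + g for m, c, g in zip(marking, take, give))
    return None

def reachable(m0):
    """Enumerate every marking reachable from the initial marking m0."""
    seen, frontier = {m0}, deque([m0])
    while frontier:
        m = frontier.popleft()
        for t in transitions:
            m2 = fire(m, t)
            if m2 is not None and m2 not in seen:
                seen.add(m2)
                frontier.append(m2)
    return seen

print(sorted(reachable((2, 0))))   # [(0, 2), (1, 1), (2, 0)]
```

A marking with no enabled transition would show up here as a state with no outgoing arcs, which is how deadlock-freedom can be checked directly on the constructed reachability graph; for unbounded nets this enumeration does not terminate, which is where coverability analysis takes over.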

Thanks for being in the class and Have a Great Holiday!