Choi, Jinyoung, Program in Cognitive Science


Chapter 2. A Constructive Approach to Models
Self-Modifying Systems in Biology and Cognitive Science, Ch. 2

Contents
- 2.7 The Concept of Information Set
- 2.8 Encodings of Observables into Variables
- 2.9 The Modelling Relation
- 2.10 Implications

2.7 The Concept of Information Set
Definition: an information set is a description of specific observations on specific observables in a given time interval. We write I for an information set; I_{i..j} denotes the observations on specific observables from time step i to time step j.
In Figure 2.7, I_1 means that we observed X1 = 2 and X2 = 3 at time step 1; I_{1..2} means that we observed X1 = 2, X2 = 3 at time step 1 and X1 = 7, X2 = 2 at time step 2.
© 2015, SNU CSE Biointelligence Lab., http://bi.snu.ac.kr
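The definition above can be sketched in code. This is a minimal illustration, not the book's formalism: the names `observations` and `I` are my own, standing in for the observed values of X1 and X2 from Figure 2.7.

```python
# Illustrative sketch of an information set as a mapping from time steps
# to the observed values of the observables X1 and X2 (values from Fig. 2.7).
observations = {
    1: {"X1": 2, "X2": 3},   # I_1: what was observed at time step 1
    2: {"X1": 7, "X2": 2},   # what was observed at time step 2
}

def I(i, j, obs=observations):
    """Return the information set I_{i..j}: observations from step i to step j."""
    return {t: obs[t] for t in range(i, j + 1) if t in obs}

print(I(1, 1))  # I_1
print(I(1, 2))  # I_{1..2}
```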

2.7 The Concept of Information Set
Example: Turing machines read a tape with their heads. Consider a Turing machine with one head reading a tape of length n: the machine needs n time steps, so the tape is described by I_{1..n}. Now consider a machine with n heads: it takes only one time step to read the whole tape, so the tape is described by I_1.
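The head-count argument above can be made concrete with a small sketch. The function `steps_to_read` is my own illustration, under the assumption that each head reads one tape cell per time step.

```python
import math

def steps_to_read(tape_length, heads):
    """Time steps needed to read the whole tape, assuming each head
    reads one cell per step (an illustrative model, not the book's)."""
    return math.ceil(tape_length / heads)

n = 8
print(steps_to_read(n, 1))  # one head: n steps -> tape described by I_{1..n}
print(steps_to_read(n, n))  # n heads: 1 step  -> tape described by I_1
```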

2.7 The Concept of Information Set
Complexity and modelling: a model needs empirical data and observables, and different models need different data or different encoding operations. Information sets directly contribute to the number of encoding operation steps.

2.8 Encodings of Observables into Variables
An encoding maps observables to variables. The encoding plays a role similar to that of the empirical information about the system.

2.8 Encodings of Observables into Variables
There are three cases of encoding. Case (1): dynamic observables and dynamic variables. 'Dynamic' means temporary and transitory: they are temporarily available and temporarily observable. In this case the encoding is simply a list of paired dynamic variables and dynamic observables, which always stand in one-to-one correspondence.

2.8 Encodings of Observables into Variables
Case (2): dynamic observables and static variables. This is a many-to-one encoding, and it is closer to practical modelling. For example, we can encode the antigens in an organism over time into a single variable: at different times different antigens are present, so there is a great number of temporary observations, but one static variable measures their total number.
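The first two encoding cases can be contrasted in a short sketch. This is my own illustration, not the author's formalism; the antigen data are invented to mirror the example above.

```python
# Case 1: one-to-one encoding — each dynamic observable is simply paired
# with its own dynamic variable.
dynamic_observables = {"X1": 2, "X2": 3}
dynamic_variables = {name: value for name, value in dynamic_observables.items()}

# Case 2: many-to-one encoding — many temporary observations (antigens
# present at different time steps) are collapsed into one static variable
# that measures their total number.
antigens_seen = {1: ["a", "b"], 2: ["b", "c", "d"]}  # time step -> antigens
total_antigens = len({a for present in antigens_seen.values() for a in present})

print(dynamic_variables)  # one variable per observable
print(total_antigens)     # one static variable for many observations
```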

2.8 Encodings of Observables into Variables
Case (3): static observables and static variables. This is probably the most familiar case. We can achieve it by defining the various dynamic observables through the states of a measuring device; the device itself provides a method for labelling the dynamic observables. For example, we can use a thermometer's state to encode the energy state of water.

2.8 Encodings of Observables into Variables
Encoding for modelling: the structure of the measuring device and the structure of the studied process must be kept unrelated (otherwise the best measuring device would be the dynamics itself). Encodings must be done separately from the dynamics.

2.9 The Modelling Relation
Natural systems (N): a domain of reality delimited by interaction. A natural system cannot be defined by a description or a model; it is often defined by natural entities.

2.9 The Modelling Relation
Abstraction: a subset of the natural system, narrowing interest down from the totality of potentially measurable qualities to the subset that will be studied. Dynamic observables may not be available at the time we form the abstraction, so the focus can only be defined as a phenomenal domain.

2.9 The Modelling Relation
Description frame (d-frame): a unified notion for the factors that influence model building, linking a description D to a natural system N. It comprises the abstraction, the information set, the encoding, and the postulates.

2.9 The Modelling Relation
Overall scheme of the modelling relation: a model is not just a description; a model is a description together with a description frame. Figure 2.16 shows the overall scheme of the modelling relation.

2.9 The Modelling Relation
Interpretation: the correspondence of a description to a natural system is not a straightforward matter. Models cannot be read out directly; they have to be interpreted, and interpretation is an intricate process that may also involve formal elements. First, the variables must be decoded into observables. Then the information set used by the description has to be identified. Lastly, postulates such as time or causality need to be identified. Figure 2.17 shows the elements of interpretation.

2.10 Implications
Relevance of models: which model is better among equally valid descriptions? A model is adequate if the questions asked by the observer can be encoded into the description. A model is interpretable if it is possible to assign an interpretation to it and the interpretation is acceptable. A model that is both adequate and interpretable is relevant, i.e. a good model.


2.10 Implications
Reduction and equivalence: reduction means that one model can be mapped onto another. Weak (mathematical) equivalence: the mathematical descriptions are mathematically identical. Strong equivalence: weak equivalence plus equivalence of the interpretations. Strongly equivalent descriptions can replace one another.
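The distinction between weak and strong equivalence can be sketched in code. This is my own construction, not the book's formalism: it treats a model as a description paired with an interpretation, as Section 2.9 suggests, and the example models (cooling vs. radioactive decay) are illustrative.

```python
# Hedged sketch: a model as a (description, interpretation) pair.
def weakly_equivalent(m1, m2):
    """Weak (mathematical) equivalence: identical mathematical descriptions."""
    return m1["description"] == m2["description"]

def strongly_equivalent(m1, m2):
    """Strong equivalence: weak equivalence plus equivalent interpretations."""
    return weakly_equivalent(m1, m2) and m1["interpretation"] == m2["interpretation"]

cooling = {"description": "dx/dt = -k*x", "interpretation": "temperature of a body"}
decay   = {"description": "dx/dt = -k*x", "interpretation": "radioactive decay"}

print(weakly_equivalent(cooling, decay))    # True: the same mathematics
print(strongly_equivalent(cooling, decay))  # False: different interpretations
```

Only strongly equivalent descriptions could replace one another; the two models above share their mathematics but not their meaning.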

Questions & Discussions