Introduction to Programming Concepts (VRH 1.9-1.17)


Introduction to Programming Concepts (VRH 1.9-1.17)
Carlos Varela, RPI
September 27, 2007
Adapted with permission from: Seif Haridi (KTH) and Peter Van Roy (UCL)

Introduction

An introduction to programming concepts:
- Declarative variables
- Functions
- Structured data (example: lists)
- Functions over lists
- Correctness and complexity
- Lazy functions
- Higher-order programming
- Concurrency and dataflow
- State, objects, and classes
- Nondeterminism and atomicity

Higher-order programming

Assume we want to write another Pascal function which, instead of adding numbers, performs exclusive-or on them: it calculates, for each number, whether it is odd or even (its parity). We could either write a new function each time we need a new operation, or write one generic function that takes the operation (another function) as an argument. The ability to pass functions as arguments, or to return a function as a result, is called higher-order programming. Higher-order programming is an aid to building generic abstractions.

Variations of Pascal

To compute the parity Pascal triangle, only the operation changes: instead of adding the two numbers above each position, take their exclusive-or:

fun {Xor X Y} if X==Y then 0 else 1 end end

Pascal triangle       Parity Pascal triangle
      1                        1
     1 1                      1 1
    1 2 1                    1 0 1
   1 3 3 1                  1 1 1 1
  1 4 6 4 1                1 0 0 0 1

Higher-order programming

fun {GenericPascal Op N}
   if N==1 then [1]
   else L in
      L = {GenericPascal Op N-1}
      {OpList Op {ShiftLeft L} {ShiftRight L}}
   end
end

fun {OpList Op L1 L2}
   case L1 of H1|T1 then
      case L2 of H2|T2 then
         {Op H1 H2}|{OpList Op T1 T2}
      end
   else nil
   end
end

fun {Add N1 N2} N1+N2 end

fun {Xor N1 N2} if N1==N2 then 0 else 1 end end

fun {Pascal N} {GenericPascal Add N} end

fun {ParityPascal N} {GenericPascal Xor N} end
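The same higher-order pattern can be sketched in Python (a hypothetical translation of the Oz code above, not part of the original slides; padding the previous row with a zero on the right/left plays the role of ShiftLeft/ShiftRight):

```python
from operator import add, xor

def generic_pascal(op, n):
    """Row n of a Pascal-like triangle, combining neighbours with op."""
    if n == 1:
        return [1]
    row = generic_pascal(op, n - 1)
    # row + [0] is ShiftLeft, [0] + row is ShiftRight
    return [op(a, b) for a, b in zip(row + [0], [0] + row)]

def pascal(n):
    return generic_pascal(add, n)

def parity_pascal(n):
    return generic_pascal(xor, n)

print(pascal(5))         # [1, 4, 6, 4, 1]
print(parity_pascal(5))  # [1, 0, 0, 0, 1]
```

Passing `add` or `xor` as the `op` argument is exactly the generic abstraction the slide describes: one function, many triangles.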

Concurrency

How do we do several things at once? Concurrency means running several activities, each at its own pace. A thread is an executing sequential program. A program can have multiple threads by using the thread instruction. For example, {Browse 99*99} can respond immediately while Pascal is still computing:

thread P in
   P = {Pascal 21}
   {Browse P}
end
{Browse 99*99}
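The same idea can be sketched with Python's threading module (an illustrative translation; pascal_row is an assumed iterative helper standing in for {Pascal 21}):

```python
import threading

def pascal_row(n):
    """Iteratively compute row n of Pascal's triangle."""
    row = [1]
    for _ in range(n - 1):
        row = [a + b for a, b in zip(row + [0], [0] + row)]
    return row

result = {}
t = threading.Thread(target=lambda: result.update(row=pascal_row(21)))
t.start()
print(99 * 99)  # the main thread answers immediately, like {Browse 99*99}
t.join()
print(len(result['row']))  # the 21st row has 21 elements
```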

Dataflow

What happens when multiple threads try to communicate? A simple way is to make communicating threads synchronize on the availability of data (data-driven execution). If an operation tries to use a variable that is not yet bound, it waits. Such a variable is called a dataflow variable.

[Figure: a dataflow network over variables X, Y, Z, U built from two multiplications and an addition]

Dataflow (II)

declare X
thread {Delay 5000} X=99 end
{Browse 'Start'}
{Browse X*X}

declare X
thread {Browse 'Start'} {Browse X*X} end
{Delay 5000}
X=99

Two important properties of dataflow:
- Calculations work correctly independent of how they are partitioned between threads (concurrent activities).
- Calculations are patient: they do not signal an error; they wait for data availability.

The dataflow property of variables makes sense when programs are composed of multiple threads.
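Python has no built-in dataflow variables, but the behaviour can be emulated with a threading.Event (a sketch under that assumption; the class name is illustrative): reading blocks until the variable is bound, mirroring the two code fragments above.

```python
import threading
import time

class DataflowVar:
    """Single-assignment variable: get() blocks until bind() is called."""
    def __init__(self):
        self._bound = threading.Event()
        self._value = None

    def bind(self, value):
        self._value = value
        self._bound.set()

    def get(self):
        self._bound.wait()  # patient: wait for data availability
        return self._value

x = DataflowVar()
threading.Thread(target=lambda: (time.sleep(0.1), x.bind(99))).start()
print('Start')
print(x.get() * x.get())  # waits until x is bound to 99, then prints 9801
```

Whichever thread reaches `x.get()` first simply waits; the result is the same regardless of how the work is partitioned, which is the first dataflow property above.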

State

How can a function learn from its past? We would like to add memory to a function so that it remembers past results. Adding memory, like adding concurrency, is an essential aspect of modeling the real world. Consider {FastPascal N}: we would like it to remember the previously calculated rows in order to avoid recalculating them. We need a concept to store, change, and retrieve a value. The simplest such concept is a (memory) cell, which is a container for a value. One can create a cell, assign a value to it, and access its current value. Cells are not variables:

declare
C = {NewCell 0}
{Assign C {Access C}+1}
{Browse {Access C}}
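A minimal Python sketch of the three cell operations (the names new_cell/assign/access are chosen to mirror the Oz ones and are not a real library API):

```python
class Cell:
    """A container for a single mutable value."""
    def __init__(self, value):
        self._value = value

def new_cell(value):
    return Cell(value)

def assign(cell, value):
    cell._value = value

def access(cell):
    return cell._value

c = new_cell(0)
assign(c, access(c) + 1)
print(access(c))  # 1
```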

Example

Add memory to Pascal to remember how many times it is called:

declare
C = {NewCell 0}
fun {FastPascal N}
   {Assign C {Access C}+1}
   {GenericPascal Add N}
end

The memory (state) is global here. Memory that is local to a function is called encapsulated state.

Objects

Functions with internal memory are called objects. The cell is invisible outside of the definition:

declare
local C in
   C = {NewCell 0}
   fun {Bump}
      {Assign C {Access C}+1}
      {Access C}
   end
end

declare
fun {FastPascal N}
   {Browse {Bump}}
   {GenericPascal Add N}
end

Classes

A class is a 'factory' of objects, where each object has its own internal state. Let us create many independent counter objects with the same behavior:

fun {NewCounter}
   local C Bump in
      C = {NewCell 0}
      fun {Bump}
         {Assign C {Access C}+1}
         {Access C}
      end
      Bump
   end
end

Classes (2)

Here is a class with two operations, Bump and Read:

fun {NewCounter}
   local C Bump Read in
      C = {NewCell 0}
      fun {Bump}
         {Assign C {Access C}+1}
         {Access C}
      end
      fun {Read}
         {Access C}
      end
      [Bump Read]
   end
end
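In Python the same encapsulation can be sketched with a closure (a hypothetical translation, not the Oz semantics): each call to new_counter creates a fresh hidden count, and only the returned bump and read can touch it.

```python
def new_counter():
    """Factory of counter objects; count is encapsulated state."""
    count = 0
    def bump():
        nonlocal count
        count += 1
        return count
    def read():
        return count
    return bump, read

bump1, read1 = new_counter()
bump2, read2 = new_counter()
bump1(); bump1(); bump2()
print(read1(), read2())  # 2 1 (each counter has its own state)
```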

Object-oriented programming

In object-oriented programming the idea of objects and classes is pushed further. Classes keep the basic properties of:
- State encapsulation
- Object factories

Classes are extended with more sophisticated properties:
- They have multiple operations (called methods)
- They can be defined by taking another class and extending it slightly (inheritance)

Nondeterminism

What happens if a program has both concurrency and state together? This is very tricky: the same program can give different results from one execution to the next. This variability is called nondeterminism. Internal nondeterminism is not a problem as long as it is not observable from the outside.

Nondeterminism (2)

declare
C = {NewCell 0}
thread {Assign C 1} end
thread {Assign C 2} end

One possible execution over time:
t0: C = {NewCell 0}   (cell C contains 0)
t1: {Assign C 1}      (cell C contains 1)
t2: {Assign C 2}      (cell C contains 2, the final value)

Nondeterminism (3)

declare
C = {NewCell 0}
thread {Assign C 1} end
thread {Assign C 2} end

Another possible execution over time:
t0: C = {NewCell 0}   (cell C contains 0)
t1: {Assign C 2}      (cell C contains 2)
t2: {Assign C 1}      (cell C contains 1, the final value)

Nondeterminism (4)

declare
C = {NewCell 0}
thread I in
   I = {Access C}
   {Assign C I+1}
end
thread J in
   J = {Access C}
   {Assign C J+1}
end

What are the possible results? Both threads increment the cell C by 1, so the expected final result of C is 2. Is that all?

Nondeterminism (5)

Another possible final result is the cell C containing the value 1:

t0: C = {NewCell 0}
t1: I = {Access C}    (I equals 0)
t2: J = {Access C}    (J equals 0)
t3: {Assign C J+1}    (C contains 1)
t4: {Assign C I+1}    (C contains 1)
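The lost-update interleaving can be replayed deterministically by hand, executing the four thread steps in the order t1..t4 (a sketch; in a real concurrent run this ordering happens only sometimes, which is exactly the nondeterminism):

```python
c = 0       # C = {NewCell 0}
i = c       # t1: thread 1 reads C; i equals 0
j = c       # t2: thread 2 reads C; j equals 0
c = j + 1   # t3: thread 2 writes; C contains 1
c = i + 1   # t4: thread 1 writes; C still contains 1, one increment is lost
print(c)    # 1, not 2
```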

Lessons learned

- Combining concurrency and state is tricky: complex programs have many possible interleavings.
- Programming is a question of mastering the interleavings. Famous bugs in the history of computer technology are due to designers overlooking an interleaving (e.g., the Therac-25 radiation therapy machine gave doses thousands of times too high, resulting in death or injury).
- If possible, avoid using concurrency and state together.
- Encapsulate state and communicate between threads using dataflow.
- Try to master interleavings by using atomic operations.

Atomicity

How can we master the interleavings? One idea is to reduce the number of interleavings by programming with coarse-grained atomic operations. An operation is atomic if it is performed as a whole or not at all: no intermediate (partial) results can be observed by any other concurrent activity. In simple cases we can use a lock to ensure the atomicity of a sequence of operations. For this we need a new entity: a lock.

Atomicity (2)

declare
L = {NewLock}

Thread 1:
lock L then
   sequence of ops 1
end

Thread 2:
lock L then
   sequence of ops 2
end

The program

declare
C = {NewCell 0}
L = {NewLock}
thread
   lock L then I in
      I = {Access C}
      {Assign C I+1}
   end
end
thread
   lock L then J in
      J = {Access C}
      {Assign C J+1}
   end
end

The final result of C is always 2.
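A Python equivalent using threading.Lock (a sketch): the lock makes each read-increment-write sequence atomic, so no interleaving can lose an update and the final value is always 2.

```python
import threading

counter = 0
lock = threading.Lock()

def increment():
    global counter
    with lock:           # lock L then ... end
        i = counter      # I = {Access C}
        counter = i + 1  # {Assign C I+1}

threads = [threading.Thread(target=increment) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # always 2
```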

Memoizing FastPascal

New version: {FasterPascal N}
- Make a store S available to FasterPascal.
- Let K be the number of rows stored in S (i.e., the max row stored is the Kth row).
- If N is less than or equal to K, retrieve the Nth row from S.
- Otherwise, compute the rows numbered K+1 to N, and store them in S.
- Return the Nth row from S.

Viewed from outside (as a black box), this version behaves like the earlier one, but faster.

declare
S = {NewStore}
{Put S 2 [1 1]}
{Browse {Get S 2}}
{Browse {Size S}}
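The steps above can be sketched in Python, with the store as a plain list kept in a closure (an illustrative translation; names are hypothetical, not the store.oz API):

```python
def make_faster_pascal():
    store = [[1]]  # the store S: rows computed so far, row N at index N-1
    def faster_pascal(n):
        while len(store) < n:  # compute rows K+1..N and store them
            prev = store[-1]
            store.append([a + b for a, b in zip(prev + [0], [0] + prev)])
        return store[n - 1]    # retrieve the Nth row from the store
    return faster_pascal

faster_pascal = make_faster_pascal()
print(faster_pascal(5))  # [1, 4, 6, 4, 1]
print(faster_pascal(3))  # [1, 2, 1], served from the store without recomputation
```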

Exercises

- VRH Exercise 1.6 (page 24).
- Change GenericPascal so that it also receives a number to use as an identity for the operation Op: {GenericPascal Op I N}. For example, you could then use it as {GenericPascal Add 0 N}, or {GenericPascal fun {$ X Y} X*Y end 1 N}.
- Prove that the alternative version of the Pascal triangle (not using ShiftLeft) is correct.
- Make AddList and OpList commutative.
- *Write the memoizing Pascal function using the store abstraction (available at store.oz).