Models of Computation
o Turing Machines
o Finite store + tape
o Transition function:
o If in state S1 and the current cell is a 0 (1), then write 1 (0), move the head left (right), and transition to state S2
o Halting problem: undecidable!
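A minimal sketch of the transition-function idea in Python. The states, symbols, and example program below are illustrative, not from the slides; the machine flips the bit it reads, moves as described, and halts in a designated state.

```python
# A minimal Turing machine simulator (illustrative sketch).
# A transition maps (state, symbol) -> (symbol to write, head move, next state).

def run_tm(transitions, tape, state, halt_state, max_steps=1000):
    tape = dict(enumerate(tape))   # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == halt_state:
            break
        symbol = tape.get(head, 0)             # blank cells read as 0
        write, move, state = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == 'R' else -1
    return [tape[i] for i in sorted(tape)]

# Example: in state S1, a 0 is rewritten as 1 (head moves left),
# a 1 is rewritten as 0 (head moves right); either way go to S2 (halt).
transitions = {
    ('S1', 0): (1, 'L', 'S2'),
    ('S1', 1): (0, 'R', 'S2'),
}
print(run_tm(transitions, [0, 1, 1], 'S1', 'S2'))  # [1, 1, 1]
```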
Models of Computation
o Turing machine variants do not add computational power:
o Multiple tapes, multiple heads, 2-directional infinite tapes, etc.
o RAM machines: same as Turing machines, except that a MOVE can bring the head to an arbitrary position on the tape => computers – von Neumann architecture
A simple Machine Language
o MEM[I] – memory
o ACC – accumulator
o STORE I
o LOAD I
o LOADC X
o ADD I
o IFZ Label
o (see the interpreter sketch below)
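A small interpreter sketch for this instruction set, assuming the usual accumulator semantics, which the slide does not spell out: STORE I writes ACC to MEM[I], LOAD I reads MEM[I] into ACC, LOADC X loads the constant X, ADD I adds MEM[I] to ACC, and IFZ jumps to Label when ACC is zero.

```python
# Interpreter sketch for the accumulator machine (assumed semantics).
def run(program, mem):
    acc, pc = 0, 0
    labels = {arg: i for i, (op, arg) in enumerate(program) if op == 'LABEL'}
    while pc < len(program):
        op, arg = program[pc]
        pc += 1
        if op == 'LOAD':    acc = mem[arg]          # ACC <- MEM[arg]
        elif op == 'LOADC': acc = arg               # ACC <- constant
        elif op == 'STORE': mem[arg] = acc          # MEM[arg] <- ACC
        elif op == 'ADD':   acc += mem[arg]         # ACC <- ACC + MEM[arg]
        elif op == 'IFZ' and acc == 0:
            pc = labels[arg]                        # jump when ACC == 0
    return mem

# Example: MEM[2] <- MEM[0] + MEM[1]
prog = [('LOAD', 0), ('ADD', 1), ('STORE', 2)]
print(run(prog, {0: 2, 1: 3, 2: 0}))  # {0: 2, 1: 3, 2: 5}
```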
Computability vs. Expressiveness
o Computability: can we compute something at all?
o Expressiveness: how easy is it to implement something we want to compute?
Uniform and Non-Uniform Models
o A computation defined by a Turing machine is uniform – the same machine solves problem instances of any size
o CIRCUITS: non-uniform models
o Example: a 16-bit adder or a 32-bit adder will not add (without extra tricks) two 64-bit numbers
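A sketch of the non-uniformity point: a ripple-carry adder built as a fixed arrangement of gates for a given width w handles only w-bit inputs; a wider input needs a different circuit. The gate-level construction below is the textbook full adder, used here for illustration.

```python
# Ripple-carry adder as a fixed-width "circuit" of boolean gates.
def full_adder(a, b, c):
    # One full-adder cell: sum and carry-out from two bits plus a carry-in.
    return a ^ b ^ c, (a & b) | (c & (a ^ b))

def make_adder(width):
    # Build an adder fixed at `width` bits; it cannot handle wider inputs.
    def add(xs, ys):                  # bit lists, least-significant first
        assert len(xs) == len(ys) == width
        out, carry = [], 0
        for a, b in zip(xs, ys):
            s, carry = full_adder(a, b, carry)
            out.append(s)
        return out + [carry]
    return add

add16 = make_adder(16)                # a 16-bit circuit
bits = lambda n, w: [(n >> i) & 1 for i in range(w)]
val  = lambda bs: sum(b << i for i, b in enumerate(bs))
print(val(add16(bits(40000, 16), bits(20000, 16))))   # 60000
# add16(bits(x, 64), ...) fails: this circuit is built for 16 bits only.
```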
Finite Functions as Building Blocks
o Finite functions (in particular, finite sets)
o 32-bit and 64-bit words are finite functions to {0,1}
o Arrays are finite functions
o Strings are arrays – therefore finite functions
o Structures and fields (name -> value) are finite functions
o Functions about functions => a Turing-equivalent model: Lambda Calculus
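A sketch of the "functions about functions" idea: Church numerals encode natural numbers purely as higher-order functions, one of the standard lambda-calculus constructions (the encoding below is the usual one, not taken from the slides).

```python
# Church numerals: the number n is the function that applies f n times.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    # Decode by applying integer successor n times to 0.
    return n(lambda k: k + 1)(0)

two   = succ(succ(zero))
three = succ(two)
print(to_int(add(two)(three)))   # 5
```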
Sparseness
o A string on V: a finite function (from 0..n) to V, where V is an alphabet and V* is the set of all strings on V
o A set S included in V* is sparse iff it has a 'small' number of strings of any given length N – where 'small' means something like 'a polynomial number (in N)' – or something else, depending on the relevant complexity class
o Sparseness has a profound effect on computation models and computer architecture: reading/writing one memory word at a time is efficient because, in most problems, relatively few words need to be changed at any given time => the von Neumann computer, the 'CPU', etc.
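A small illustration of the definition (the example sets are my own, not from the slides): binary strings with at most one '1' form a sparse subset of {0,1}*, with n+1 strings of each length n, while the full set {0,1}* grows exponentially.

```python
from itertools import product

def count_at_length(pred, n, alphabet='01'):
    # |{ s in V^n : pred(s) }| by brute-force enumeration.
    return sum(1 for s in product(alphabet, repeat=n) if pred(''.join(s)))

sparse = lambda s: s.count('1') <= 1   # at most one '1'
for n in range(1, 6):
    print(n, count_at_length(sparse, n), 2 ** n)
# At length n: the sparse set has n+1 members, V^n has 2^n –
# polynomial vs. exponential growth.
```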
Programming Languages deal with Sparse Sets
o Intuitively – only a small set of the possible combinations of syntactic elements are meaningful and end up being used in an actual programming language
o Various conjectures and results (mostly by Hartmanis) state that if there are (complexity-wise) hard sparse sets then something in the complexity hierarchy collapses (e.g., Mahaney's theorem: if a sparse NP-complete set exists, then P = NP)
o Efficient representation of sparse sets => a solution: hashing – represent large structures with small ones (small integers), knowing that only 'a few' of them will be used => dictionaries => symbol tables => memory allocators => object and code sharing mechanisms (see the interning sketch below)
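A sketch of the hashing idea as it shows up in a symbol table: each distinct name (a large structure) is mapped once to a small integer, and every later use shares that integer. The table layout is illustrative.

```python
# Symbol table sketch: intern each distinct string once, as a small integer.
class SymbolTable:
    def __init__(self):
        self.index = {}      # name -> small integer (a hash table underneath)
        self.names = []      # small integer -> name

    def intern(self, name):
        # Only the 'few' names actually used ever get an entry.
        if name not in self.index:
            self.index[name] = len(self.names)
            self.names.append(name)
        return self.index[name]

syms = SymbolTable()
print(syms.intern('x'), syms.intern('counter'), syms.intern('x'))  # 0 1 0
```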
Kolmogorov-Chaitin Algorithmic Complexity
o The size of the smallest program that generates a given string (or set of strings)
o Not computable
o Does not depend on the programming language, up to an additive constant (the invariance theorem)
o Related to compressibility: random sequences are harder to compress than regular sequences
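A quick illustration of the compressibility connection, using zlib as a stand-in for an ideal compressor (real compressors only give an upper bound on Kolmogorov complexity): a regular string shrinks dramatically, a random one barely at all.

```python
import os, zlib

regular = b'01' * 5000          # highly regular: a tiny program generates it
random_ = os.urandom(10000)     # incompressible with high probability

for label, s in [('regular', regular), ('random', random_)]:
    print(label, len(s), '->', len(zlib.compress(s, 9)))
# Typical output: regular 10000 -> a few dozen bytes,
#                 random  10000 -> roughly 10000 bytes
```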