
CSP: Communicating Sequential Processes

Overview
- Computation model and CSP primitives
- Refinement and trace semantics
- Automaton view
- Refinement checking algorithm
- Failures semantics

CSP
Communicating Sequential Processes, introduced by Hoare: an abstract, formal, event-based language to model concurrent systems. It belongs to the "process algebra" family; elegant, with refinement-based reasoning. Example, a Senseo coffee machine with events turnOn, turnOff, 1c, 2c, 1w, 2w, and boil:

Senseo = turnOn → Active
Active = (turnOff → Senseo) □ (1c → boil → 1w → Active) □ (2c → boil → 2w → Active)

References
- Quick info at Wikipedia.
- Communicating Sequential Processes, Hoare, Prentice Hall; one of the most cited computer science references. Renewed edition by Jim Davies, available free!
- Model Checking CSP, Roscoe.

Computation model
A concurrent system is made of a set of interacting processes. Each process sequentially produces events; each event is atomic. Examples: turnOn, turnOff, play, reset, lockAcquire, lockRelease. Some events are internal, so not observable from outside. There is no notion of variables, nor data. A process is abstractly described by the sequences of events that it produces.

Computation model
Multiple processes can synchronize on an event, say a. They wait for each other until all synchronizing processes are ready to execute a; then they simultaneously execute a. As in:

a → STOP  ||{a}  x → a → STOP

The first process will have to wait until the second has produced x.

Some notation first
Names: A, B, C for alphabets (sets of events); a, b, c for events (actions); P, Q, R for processes. Formally, for each process we also specify its alphabet, but here we will usually leave this implicit. αP denotes the alphabet of P.

CSP constructs
We'll only consider a simplified syntax:

Process ::= STOP
          | Event → Process
          | Process □ Process
          | Process |¯| Process
          | Process || Process
          | Process / Alphabet
          | ProcessName

Process definition: ProcessName = Process

STOP, sequence, and recursion
Some simple primitives:

STOP // as the name says
a → P // do a, then behave as P

Recursion is allowed, e.g. Clock = tick → Clock. Recursion must be 'guarded' (so no left recursion).

Internal choice
We also have internal / non-deterministic choice, P |¯| Q, as in:

R1 = (a → P) |¯| (b → Q)

R1 behaves as either a → P or b → Q, but the choice is decided internally by R1 itself. From outside it is as if R1 makes a non-deterministic choice. R1 may therefore deadlock (e.g. the environment only offers a, but R1 has decided that it wants to do b instead).

External choice
Denoted by P □ Q. Behaves as either P or Q; the choice is decided by the environment. Example:

R2 = (a → P) □ (b → Q)

R2 behaves as either a → P or b → Q, depending on the actions offered by the environment (e.g. think of a, b as representing a user pushing buttons).

External choice
However, it can degenerate to non-deterministic choice:

R3 = (a → P) □ (a → Q)

Parallel composition
Denoted by P || Q. This denotes the process that behaves as the interleaving of P and Q, but synchronizing them on αP ∩ αQ. Example:

R = (a1 → b → STOP) || (a2 → b → STOP)

This produces a process that behaves as either of these (notice the interleaving on a1, a2 and the synchronization on b):

a1 → a2 → b → STOP
a2 → a1 → b → STOP

Hiding (abstraction)
Denoted by P / A. Hide (internalize) the events in A, so that they are not visible to the environment. Example:

R = (a1 → b → STOP) || (a2 → b → STOP)
R / {b} = (a1 → a2 → STOP) □ (a2 → a1 → STOP)

In particular, (P || Q) / (αP ∩ αQ) is the parallel composition of P and Q, after which we internalize their synchronized events.

Specifications and programs have the same status
That is, a specification is expressed by another CSP process:

SenseoSpec = (1c → 1w → SenseoSpec) □ (2c → 2w → SenseoSpec)

More precisely, when events not in {1c, 1w, 2c, 2w} are abstracted away, our Senseo machine should behave as the above SenseoSpec process. This is expressed by refinement:

SenseoSpec ⊑ Senseo / {turnOn, turnOff, boil}

Such a property cannot be conveniently expressed in temporal logic; conversely, CSP has no native temporal logic constructs to express properties. The refinement relation P ⊑ Q means that Q is at least as good as P. What this exactly entails depends on our intent; in any case, we usually expect a refinement relation to be a preorder.

Monotonicity
A relation ⊑ (over A) is a preorder if it is reflexive and transitive:
1. P ⊑ P
2. P ⊑ Q and Q ⊑ R implies P ⊑ R
A function F : A → A is, roughly, monotonic if its value increases when we increase its argument. More precisely, it is monotonic wrt a relation ⊑ iff:

P ⊑ Q  ⇒  F(P) ⊑ F(Q)

The definition is analogous if F has multiple arguments.

Monotonicity & compositionality
Suppose we have a preorder ⊑ over CSP processes, acting as a refinement relation; φ ⊑ P expresses that P satisfies the specification φ. A monotonic || would give us this result, which you can use to decompose the verification of a system to the component level, avoiding, in theory, state explosion:

φ1 ⊑ P and φ2 ⊑ Q  implies  φ1 || φ2 ⊑ P || Q

(Note that this presumes we have the specifications of the components.) Many formalisms for concurrent systems do not have this; CSP's monotonicity is mainly due to its level of abstraction. So, can we find a notion of refinement such that all CSP constructs are monotonic?

Trace Semantics
Idea: abstractly consider two processes to be equivalent if they generate the same traces. Introduce traces(P), the set of all finite traces (sequences of events) that P can produce. E.g.:

traces(a → b → STOP) = { <>, <a>, <a,b> }

This gives a simple semantics of CSP processes, but it is oblivious to certain things. It is still useful to check safety, and it induces a natural notion of refinement.

Trace Semantics
We can define "traces" inductively over the CSP operators:

traces(STOP) = { <> }
traces(a → P) = { <> } ∪ { <a>^s | s ∈ traces(P) }
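The inductive definition above can be sketched directly in code. This is a minimal illustration, assuming a hypothetical tuple encoding of processes: ("STOP",) for STOP and ("prefix", a, P) for a → P; traces are tuples of event names.

```python
def traces(p):
    """Traces of a (non-recursive) process in the tuple encoding above."""
    if p[0] == "STOP":
        return {()}                                  # traces(STOP) = { <> }
    if p[0] == "prefix":                             # traces(a -> P)
        _, a, q = p
        return {()} | {(a,) + s for s in traces(q)}  # <> plus <a>^s
    raise ValueError("unknown constructor")

# Slide example: traces(a -> b -> STOP) = { <>, <a>, <a,b> }
P = ("prefix", "a", ("prefix", "b", ("STOP",)))
```

Running traces(P) reproduces the three traces listed on the previous slide.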

Trace Semantics
If s is a trace, s|A is the trace obtained by throwing away the events not in A; pronounced "s restricted to A". For example, <a,b,c>|{a,c} = <a,c>. Now we can define:

traces(P/A) = { s|(αP − A) | s ∈ traces(P) }

Trace Semantics
If A is an alphabet, A* denotes the set of all traces over the events in A. E.g. <a,b,a> ∈ {a,b}* and <a,b,a> ∈ {a,b,c}*, but <a,b,a> ∉ {b}*.

traces(P || Q) = { s | s ∈ (αP ∪ αQ)*, s|αP ∈ traces(P) and s|αQ ∈ traces(Q) }
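The definition of traces(P || Q) is a membership test, which makes it easy to sketch. The following illustration (names are just the slide's example) checks whether a candidate trace belongs to the parallel composition by projecting it onto each alphabet:

```python
def restrict(s, A):
    """s|A : drop the events of s that are not in A."""
    return tuple(e for e in s if e in A)

def is_parallel_trace(s, tracesP, aP, tracesQ, aQ):
    """Is s a trace of P || Q, per the definition on this slide?"""
    return (set(s) <= aP | aQ
            and restrict(s, aP) in tracesP
            and restrict(s, aQ) in tracesQ)

# P = a1 -> b -> STOP, Q = a2 -> b -> STOP (the example on the next slide)
tracesP = {(), ("a1",), ("a1", "b")}
tracesQ = {(), ("a2",), ("a2", "b")}
aP, aQ = {"a1", "b"}, {"a2", "b"}
```

For instance <a1,a2,b> qualifies, while <b> does not: its restriction to αP is <b>, which P cannot do first.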

Example
Consider:

P = a1 → b → STOP // αP = {a1, b}
Q = a2 → b → STOP // αQ = {a2, b}

traces(P || Q) = { <>, <a1>, <a2>, <a1,a2>, <a2,a1>, <a1,a2,b>, ... }

Notice that e.g.:
<a1,a2,b>|αP = <a1,b> ∈ traces(P)
<a1,a2,b>|αQ = <a2,b> ∈ traces(Q)

Trace Semantics
traces(P □ Q) = traces(P) ∪ traces(Q)
traces(P |¯| Q) = traces(P) ∪ traces(Q)
So in this semantics you can't distinguish between internal and external choices.

Traces of recursive processes
Consider P = (a → a → P) □ (b → P). How do we compute traces(P)? According to the definitions:

traces(P) = { <>, <a> } ∪ { <a,a>^t | t ∈ traces(P) } ∪ { <b>^t | t ∈ traces(P) }

Define traces(P) as the smallest solution of the above equation.
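The smallest solution can be approximated by iterating the defining equation starting from the empty set; truncating traces to a bound n makes the iteration terminate. A sketch, hard-coding the slide's equation for P = (a → a → P) □ (b → P):

```python
def F(T, n):
    """One application of the defining equation, truncated to length <= n."""
    out = {(), ("a",)}                      # { <>, <a> }
    out |= {("a", "a") + t for t in T}      # { <a,a>^t | t in T }
    out |= {("b",) + t for t in T}          # { <b>^t   | t in T }
    return {t for t in out if len(t) <= n}

def traces_upto(n):
    """Least fixpoint of F, restricted to traces of length <= n."""
    T = set()
    while True:
        T2 = F(T, n)
        if T2 == T:
            return T
        T = T2
```

For example <b,a,a> is a trace (do b, then the two a's), whereas <a,b> is not: after one a the process is committed to a second a.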

Trace Semantics
We can now define refinement as trace inclusion. Let P, Q be processes over the same alphabet:

P ⊑ Q = traces(Q) ⊆ traces(P)

This implies that Q won't produce any 'unsafe trace' unless P itself can produce it. Moreover, this relation is obviously a preorder.
Theorem: all CSP operators are monotonic wrt this trace-based refinement relation.

Verification
Because a specification is expressed in terms of refinement, φ ⊑ P, verification in CSP amounts to refinement checking. In the trace semantics it amounts to checking:

traces(P) ⊆ traces(φ)

We can't check this directly, since the sets of traces are typically infinite. But if we view CSP processes as automata, we can do this checking with some form of model checking.

Automaton semantics
Represent a CSP process P by an automaton M_P that generates the same set of traces. Such an automaton can be systematically constructed from P's CSP description. However, the resulting M_P may be non-deterministic, so we convert it to a deterministic automaton generating the same traces: comparing deterministic automata is easier when we later check refinement. There is a standard procedure for this conversion. Things become more complicated, however, when we later look at the failures semantics.

Only finite state processes
Some CSP processes may have an infinite number of states, e.g. Bird0 below:

Bird0 = (flyup → Bird1) □ (eat → Bird0)
Bird(i+1) = (flyup → Bird(i+2)) □ (flydown → Bird(i))

We will only consider finite state processes.

Automaton semantics
P = a → b → P corresponds to a two-state automaton that alternates a and b. Another example:

Senseo = turnOn → Select
Select = b1 → coffee → Select □ b2 → coffee → coffee → Select

(Diagram: the Senseo automaton, with transitions labeled turnOn, b1, b2, coffee.)

No distinction between ext. and int. choice
P = (a → STOP) □ (b → P)
P = (a → STOP) |¯| (b → P)
For the internal choice, the automaton has τ transitions: internal actions representing the internal decision in choosing between a and b. However, since in the trace semantics we don't see the difference between □ and |¯| anyway, for now we define their automata to be the same.

Converting to a deterministic automaton
P = (a → c → STOP) □ (a → b → P)
"□" can still lead to implicit non-determinism: from the initial state s there are two a-transitions, to states t and u. This should be indistinguishable in the trace semantics, so we convert the automaton to a deterministic one, essentially by merging the end-states of common events: {s} goes by a to {t,u}, which goes by b back to {s} and by c to {v}. The transformation preserves traces.
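The merging described above is the standard subset construction. A sketch for NFAs without τ transitions, using the slide's example P = (a → c → STOP) □ (a → b → P) with hypothetical state names s, t, u, v:

```python
def determinize(init, delta):
    """Subset construction. delta maps (state, event) -> set of states.
    Returns (initial subset-state, DFA transitions as a dict)."""
    start = frozenset([init])
    dfa, seen, todo = {}, {start}, [start]
    while todo:
        S = todo.pop()
        # all events enabled from some state in S
        for a in {a for (s, a) in delta if s in S}:
            T = frozenset(t for s in S for t in delta.get((s, a), set()))
            dfa[(S, a)] = T
            if T not in seen:
                seen.add(T)
                todo.append(T)
    return start, dfa

# NFA of the slide: s -a-> {t, u}; t -b-> s; u -c-> v
delta = {("s", "a"): {"t", "u"},
         ("t", "b"): {"s"},
         ("u", "c"): {"v"}}
```

The resulting DFA has exactly the transitions described in the slide: {s} -a-> {t,u}, {t,u} -b-> {s}, {t,u} -c-> {v}.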

Hiding
For P / {x,y}, replace the hidden events x and y in P's automaton by τ transitions, then convert the result to a deterministic version.

Parallel composition
P = a → b → P
Q = (b → Q) □ (c → STOP)
The common alphabet is {b}. The automaton of P || Q is the product automaton, with paired states such as (0,x), (0,y), (1,x), (1,y): P and Q move independently on their own events, and move together on the shared event b.

Checking trace refinement
Formally, we represent a deterministic automaton M by a tuple (S, s0, A, R), where:
S is M's set of states;
s0 is the initial state;
A is the alphabet (set of events); every transition in M is labeled by an event;
R : S × A → pow(S) encodes the transitions of M.
"R s a = {t}" means that M can go from state s to t by producing event a; "R s a = ∅" means that M can't produce a when it is in state s. M is deterministic if R s a is always either ∅ or a singleton; else it is non-deterministic.

Checking trace refinement
Let M_P = (S, s0, A, R) and M_Q = (T, t0, A, R') be deterministic (!) automata representing respectively the processes P and Q, over the same alphabet. We want to check:

traces(Q) ⊆ traces(P)

For s ∈ S, let initials_P(s) be the set of P's possible next events when it is in state s:

initials_P(s) = { a | R s a ≠ ∅ }

Let's construct the intersection M_P ∩ M_Q; it contains all traces which both automata can do. Then we check the initials of both automata at each state of the intersection.

Example
(Diagram: M_P with states 0, 1 and M_Q with states x, y, z; the intersection automaton has states (0,x), (1,y), (0,z). At each pair we compare the initials, e.g. initials_P(0) = {a} against initials_Q(x) = {a}, initials_P(1) = {b} against initials_Q(y) = {b}, and initials_P(0) = {a} against initials_Q(z) = {b}.)

Checking trace refinement
The traces of M_Q are a subset of those of M_P iff for all (s,t) in M_P ∩ M_Q we have:

initials_P(s) ⊇ initials_Q(t)

If at some (s,t) this condition is violated, then u^<c> is a counterexample, where u is a trace that leads to the state (s,t), and c is an event in initials_Q(t) \ initials_P(s). This gives an algorithm to check refinement: construct the intersection automaton, and check the above condition on every state of the intersection. You can also construct the intersection lazily.

Refinement Checking Algorithm

checked := ∅ ; pending := { (s0,t0) } ;
while pending ≠ ∅ do {
   get and remove a pair (s,t) from pending ;
   if initials(s) ⊇ initials(t) then {
      checked := {(s,t)} ∪ checked ;
      pending := pending ∪ ( { (s',t') | ∃a. s' ∈ R s a ∧ t' ∈ R' t a } \ checked )
   }
   else error!
}
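The worklist loop above can be sketched as follows, for deterministic automata encoded as dicts mapping (state, event) to the next state; the spec/implementation automata here are small hypothetical examples:

```python
def initials(delta, s):
    """The events enabled in state s."""
    return {a for (u, a) in delta if u == s}

def trace_refines(p0, deltaP, q0, deltaQ):
    """Check traces(Q) <= traces(P) by exploring the intersection
    automaton and verifying initials_P(s) >= initials_Q(t) everywhere."""
    checked = set()
    pending = [(p0, q0)]
    while pending:
        (s, t) = pending.pop()
        if (s, t) in checked:
            continue
        if not initials(deltaQ, t) <= initials(deltaP, s):
            return False          # (s, t) yields a counterexample trace
        checked.add((s, t))
        for a in initials(deltaQ, t):
            pending.append((deltaP[(s, a)], deltaQ[(t, a)]))
    return True

# Spec P = a -> b -> STOP; implementations Q1 = a -> STOP, Q2 = b -> STOP
deltaP = {(0, "a"): 1, (1, "b"): 2}
deltaQ1 = {(0, "a"): 1}
deltaQ2 = {(0, "b"): 1}
```

Q1 trace-refines P (it only does a prefix of P's behaviour), while Q2 does not, since b is not initially allowed by the spec.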

More refined semantics?
Unfortunately, in the trace-based semantics these are equivalent:

P = (a → STOP) □ (b → STOP)
Q = (a → STOP) |¯| (b → STOP)

But Q may deadlock when we put it in parallel with e.g. E = a → STOP, whereas P won't.

Refusal
Suppose αP = {a,b}. Then:
P = a → STOP will refuse to synchronize over b.
Q = (a → STOP) □ (b → STOP) will refuse neither a nor b.
R = (a → STOP) |¯| (b → STOP) may refuse to sync over a, or over b, but not over both (if the environment can do either a or b, but leaves the choice to the process).

Refusal
An offer to P is a set of event choices that the environment (of P) is offering to P as the first event to synchronize on; the choice is up to P. So we define a refusal of P as an offer that P may fail to synchronize with (due to internal choices, P may come to a state where it can't sync over any event in the offer). refusals(P) is the set of all P's refusals. E.g.:

Q = (a → STOP) □ (b → STOP), refusals(Q) = { ∅ }
R = (a → STOP) |¯| (b → STOP), refusals(R) = { ∅, {a}, {b} }

Refusals
Assuming alphabet A:

refusals(STOP) = { X | X ⊆ A }
refusals(a → P) = { X | X ⊆ A ∧ a ∉ X } // refuse any offer that does not include a

Refusals
refusals(P □ Q) = refusals(P) ∩ refusals(Q)
refusals(P |¯| Q) = refusals(P) ∪ refusals(Q)
Example, assuming alphabet {a,b}, with P = a → ... and Q = b → ...: the internal choice P |¯| Q may refuse ∅, {a}, or {b}, but won't refuse {a,b}.
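These clauses can be sketched directly, again over the hypothetical tuple encoding ("STOP",), ("prefix", a, P), ("ext", P, Q), ("int", P, Q), with refusals represented as frozensets:

```python
from itertools import chain, combinations

def subsets(A):
    """All subsets of alphabet A, as frozensets."""
    A = sorted(A)
    return {frozenset(c) for c in
            chain.from_iterable(combinations(A, r) for r in range(len(A) + 1))}

def refusals(p, A):
    if p[0] == "STOP":
        return subsets(A)                            # STOP refuses everything
    if p[0] == "prefix":                             # refuse offers without a
        return {X for X in subsets(A) if p[1] not in X}
    if p[0] == "ext":                                # external choice: intersection
        return refusals(p[1], A) & refusals(p[2], A)
    if p[0] == "int":                                # internal choice: union
        return refusals(p[1], A) | refusals(p[2], A)
    raise ValueError("unknown constructor")

# The later slide's example, alphabet {a,b,c}:
# P = ((a -> STOP) [] (b -> STOP)) |~| ((b -> STOP) [] (c -> STOP))
stop = ("STOP",)
P = ("int",
     ("ext", ("prefix", "a", stop), ("prefix", "b", stop)),
     ("ext", ("prefix", "b", stop), ("prefix", "c", stop)))
```

Computing refusals(P, {a,b,c}) gives { ∅, {a}, {c} }, matching the worked example two slides below.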

Refusals of ||
refusals(P || Q) = { X ∪ Y | X ∈ refusals(P) ∧ Y ∈ refusals(Q) }
Example, with P = a → ..., αP = {a,b,x} and Q = c → ..., αQ = {c,d,x}:
P || Q = (a → c → ...) □ (c → a → ...)
refusals of P: {b,x} and all its subsets
refusals of Q: {d,x} and all its subsets
refusals of P || Q: {b,d,x} and all its subsets
So the composition refuses the unoffered common actions together with each side's unoffered own actions.

Refusals of ||
refusals(P || Q) = { X ∪ Y | X ∈ refusals(P) ∧ Y ∈ refusals(Q) }
Example, with P = x → ..., αP = {a,b,x} and Q = x → ..., αQ = {c,d,x}:
P || Q = x → ...
refusals of P: {a,b} and all its subsets
refusals of Q: {c,d} and all its subsets
refusals of P || Q: {a,b,c,d} and all its subsets

Example
What are the refusals of this process? Assume {a,b,c} as the alphabet.

P = ((a → STOP) □ (b → STOP)) |¯| ((b → STOP) □ (c → STOP))

The left operand refuses the intersection of {b,c}+subsets and {a,c}+subsets, i.e. { ∅, {c} }; the right operand similarly refuses { ∅, {a} }. So refusals(P) = { ∅, {a}, {c} }.

Refusals after s
Define refusals(P/s) as the refusals of P after producing the trace s. Example, with alphabet αP = {a,b}:

P = (a → P) |¯| (b → b → STOP)

refusals(P/<>) = refusals(P)
refusals(P/<b>) = { ∅, {a} }
refusals(P/<b,b>) = all subsets of αP

"Failures"
Define:

failures(P) = { (s,X) | s ∈ traces(P), X ∈ refusals(P/s) }

(s,X) is a 'failure' of P, meaning that P can perform s, after which it may deadlock when offered the alternatives in X. Note that due to non-determinism, there may be several possible states where P may end up after doing s. E.g. (s, αP) ∈ failures(P) means that after s, P may stop. If for all X, (s,X) ∈ failures(P) implies a ∉ X, then after s, P cannot refuse a (implying progress!).

Example
Consider this P with αP = {a,b}:

P = (a → STOP) |¯| (b → STOP)

P's failures:
(<>, ∅), (<>, {a}), (<>, {b})
(<a>, {a,b}) ... and the other (<a>, X) where X is a subset of {a,b}
(<b>, {a,b}) ... and the other (<b>, X) where X is a subset of {a,b}
Notice the "closure"-like property in X and s.

Failures Refinement
We can use failures as our semantics, and define refinement as follows. Let P and Q have the same alphabet:

P ⊑ Q = failures(Q) ⊆ failures(P)

This is also a preorder! And it implies trace refinement, since:

traces(P) = { s | (s, ∅) ∈ failures(P) }

So it follows that P ⊑ Q implies traces(Q) ⊆ traces(P).
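The failures of finite, recursion-free processes can be computed compositionally, which makes the example on the previous slide checkable. This is a sketch under the same hypothetical tuple encoding as before; the clauses for the choice operators follow the standard failures-model treatment (internal choice takes the union; external choice must agree initially, then behaves as either side):

```python
from itertools import chain, combinations

def subsets(A):
    return {frozenset(c) for c in
            chain.from_iterable(combinations(sorted(A), r)
                                for r in range(len(A) + 1))}

def failures(p, A):
    """Failures (trace, refusal) of a recursion-free process over alphabet A."""
    if p[0] == "STOP":
        return {((), X) for X in subsets(A)}
    if p[0] == "prefix":
        _, a, q = p
        return ({((), X) for X in subsets(A) if a not in X}
                | {((a,) + s, X) for (s, X) in failures(q, A)})
    if p[0] == "int":                   # |~| : union of both operands' failures
        return failures(p[1], A) | failures(p[2], A)
    if p[0] == "ext":                   # [] : agree on <>, then either side
        fP, fQ = failures(p[1], A), failures(p[2], A)
        return ({f for f in fP & fQ if f[0] == ()}
                | {f for f in fP | fQ if f[0] != ()})
    raise ValueError("unknown constructor")

# P = (a -> STOP) |~| (b -> STOP), alphabet {a,b}
stop = ("STOP",)
P = ("int", ("prefix", "a", stop), ("prefix", "b", stop))
```

The computed set reproduces the listed failures: P may initially refuse {a} (having internally chosen b), may stop entirely after <a>, but cannot initially refuse the whole offer {a,b}.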

Back to automata again
As before, we want to use automata to check refinement. However, now we can't just remove the non-determinism, because it does matter in the failures semantics: the subset construction merges, for example, the states {s} and {s,t} reached by equal a-transitions. The transformation, although it preserves traces, does not preserve refusals.

Back to automata
Still, deterministic automata are attractive, because we have seen how to check trace inclusion on them. Furthermore, in a deterministic automaton, the end-state u after producing a trace s is unique. Now remember that a 'failure' is a pair (trace, refusal). Since a trace is identified uniquely by its end-state, this suggests a strategy: label each state with its refusals. Then we can adapt our trace-based refinement checking algorithm to also check failures.

Example
P = a → ((b → P) |¯| (a → B))
B = b → B
Q = a → b → (Q |¯| STOP)
Assuming {a,b} as the alphabet: is P ⊑ Q? (Diagram: P's automaton with τ transitions, and its determinized version, with each state labeled by its refusals, e.g. ∅, {a} and ∅, {b}.)

Example
Is P ⊑ Q, where Q = a → b → (Q |¯| STOP)? (Diagram: the automata of P and Q with each state labeled by its refusals; Q's trailing internal choice with STOP yields a state that refuses all subsets of {a,b}.)

But...
The procedure doesn't work well with e.g.:

((a → STOP) |¯| (b → STOP)) □ (c → STOP)

(Diagram: the naive automaton has a state with both τ transitions, for the internal choice, and a c transition, so it is unclear whether to label the states with refusals {b,c}, {a,c}, {a,b} or with {b}, {a}, {a}, {b}.)

Normalizing CSP processes
Normalize your CSP description so that each process has this form:

P = (a → Q1) □ (b → Q2) □ ... // a, b, ... distinct
    |¯| (e → R1) □ (f → R2) □ ... // e, f, ... distinct

using the distribution laws:

P □ (Q |¯| R) = (P □ Q) |¯| (P □ R)
P |¯| (Q □ R) = (P |¯| Q) □ (P |¯| R)

When building the automaton representing such a process, each state either has outgoing arrows which are all τ-steps, or outgoing arrows which are all non-τ.

Example
P = (a → STOP) □ ((b → P) |¯| (a → B))
B = b → B
After normalizing:
P = ((a → STOP) □ (b → P)) |¯| (a → (STOP |¯| B))
(Diagram: P's automaton before and after normalization, with merged states such as {0,1,3} and {2,4,5,6}, each labeled by its refusals.)

Example
So, is P ⊑ Q, where Q = a → b → (Q |¯| STOP)? (Diagram: the normalized automata of P and Q, with the refusals at each state, compared pairwise as in the trace-based algorithm.)

Some notes
For the sake of simplicity, the algorithm explained here deviates from the original in Roscoe: it's not necessary to normalize the 'implementation' side; Roscoe still normalizes the specification side. We also ignore "divergence". In the worst case, normalization may produce a process whose size is exponential wrt the original; in practice it's usually not that bad, since the specification side is usually much simpler than the implementation side.