On the Expressive Power of Primitives for Compensation Handling

Ivan Lanese
Computer Science Department, University of Bologna/INRIA, Italy

Joint work with Catia Vaz and Carla Ferreira
Error handling here?
Well…
Error handling

• Many possible errors/unexpected events
  – Even in Cyprus
  – Even more in concurrent and distributed systems
• Possible sources of errors
  – Received data may not have the expected format
  – Communication partners may disconnect
  – Communication may be unreliable
  – …
• A fault is an abnormal situation that forbids the continuation of an activity
• Faults should be managed so that the whole system reaches a consistent state
Compensation handling

• Managing errors requires undoing previously completed activities
• Undoing cannot be perfect
  – Some activities cannot be undone
  – It is impossible to lock resources for long times
• The programmer defines some code (the handler) to take the system to a consistent state
• Handlers are associated with long-running transactions
  – Computations that either succeed or are compensated
  – A weaker requirement than ACID transactions
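The long-running-transaction idea can be sketched in ordinary code. This is a minimal saga-style sketch, not taken from any of the calculi discussed; all names (`run_transaction`, `pay`, the book/unbook/refund actions) are illustrative:

```python
# Minimal long-running-transaction sketch: each completed activity
# installs a compensation; on a fault, the installed compensations run
# to bring the system back to a consistent state.

def run_transaction(steps):
    """steps: list of (action, compensation) pairs; actions may raise."""
    installed, log = [], []
    try:
        for action, compensation in steps:
            log.append(action())
            installed.append(compensation)   # only completed steps get one
    except Exception:
        for comp in reversed(installed):     # undo in reverse order
            log.append(comp())
    return log

def pay():
    raise RuntimeError("payment failed")

trace = run_transaction([
    (lambda: "book", lambda: "unbook"),
    (pay,            lambda: "refund"),
])
print(trace)  # ['book', 'unbook'] (pay never completed, so no refund runs)
```

The transaction as a whole either succeeds or is compensated: that is the weaker-than-ACID guarantee described above.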
Map of the talk

• Comparing primitives for compensations
• A hierarchy of calculi
• Encoding parallel recovery
• An impossibility result
• Conclusions
Map of the talk

• Comparing primitives for compensations
• A hierarchy of calculi
• Encoding parallel recovery
• An impossibility result
• Conclusions
Different primitives have been proposed

• Different calculi and languages provide primitives for fault and compensation handling
  – BPEL, Sagas, StAC, cjoin, SOCK, dcπ, webπ, …
• Are the proposed primitives equivalent?
• Which are the best ones?
A difficult problem

• Approaches to compensation handling can differ in many features
  – Flat vs nested transactions
  – Automatic vs programmed kill of subtransactions
  – Static vs dynamic definition of compensations
• Approaches are applied to different underlying languages
  – Differences between the languages may hide differences between the primitives
Our approach

• Take the simplest possible calculus (the π-calculus)
• Add different primitives to it
• Compare their expressive power by looking for compositional encodings
• Try to export the results to the original calculi
• There are too many possible differences, so we concentrate on static vs dynamic definition of handlers
  – Other differences will be considered in future work
Static approach

• The error recovery code is fixed
  – Java: try P catch e Q
  – Whenever a fault is triggered inside P, code Q is executed
• This is the approach of Java, webπ, the πt-calculus and the conversation calculus
• In general, recovery should depend on the computation done so far
• Possible approaches
  – Use nested try-catch blocks
    » More complex code
  – Or Q has to check the state to understand when the fault happened
    » Needs auxiliary variables; risk of race conditions
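The state-checking drawback can be made concrete in ordinary code. A hedged sketch (the flags and names are mine): with static recovery the single fixed handler must inspect auxiliary variables to know how far the computation got.

```python
# Static recovery with one fixed handler: since the handler code is fixed
# at write time, it must inspect auxiliary flags to know which activities
# to undo. These flags are exactly the extra state the slide warns about.

def run(fail_at=None):
    log, booked, paid = [], False, False
    try:
        log.append("book"); booked = True
        if fail_at == "pay":
            raise RuntimeError("fault")
        log.append("pay"); paid = True
    except RuntimeError:
        # one fixed handler for the whole block
        if paid:
            log.append("refund")
        if booked:
            log.append("unbook")
    return log

print(run(fail_at="pay"))  # ['book', 'unbook']
print(run())               # ['book', 'pay'] (no fault, no compensation)
```

The alternative named above, nested try-catch blocks, avoids the flags but duplicates structure at every step.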
Dynamic approach

• The error recovery code can be updated during the computation
  – Requires a specific primitive for doing the update
• Parallel recovery: new error recovery processes can be added in parallel
  – This is the approach of dcπ, and of Sagas and StAC for parallel activities
• General dynamic recovery: a (higher-order) function can be applied to the error recovery code
  – This is the approach of SOCK
  – BPEL, Sagas and StAC use backward recovery for sequential activities
    » A particular form of general dynamic recovery
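The difference between the two flavours can be sketched by treating the current compensation as explicit state. A toy model (representation and names mine): parallel recovery only adds alongside the old compensation, while general dynamic recovery applies an arbitrary function to it.

```python
# Current compensation modeled as a list of actions. Parallel recovery
# corresponds to updates of the shape λX. Q|X (add alongside X); general
# dynamic recovery may apply any function to X: prefix it (backward
# recovery), replace it, or delete it (λX.0).

comp = []                                   # currently installed compensation

def install_parallel(q):                    # λX. q | X
    comp.insert(0, q)

def install_general(update):                # λX. update(X)
    global comp
    comp = update(comp)

install_parallel("unbook")
install_parallel("refund")
after_parallel = list(comp)                 # ['refund', 'unbook']

install_general(lambda X: ["notify"] + X)   # backward: prefix the old X
after_prefix = list(comp)                   # ['notify', 'refund', 'unbook']

install_general(lambda X: [])               # compensation deletion: λX.0
```

Every parallel update is expressible as a general dynamic one, but not vice versa; the impossibility result later makes this gap formal.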
Map of the talk

• Comparing primitives for compensations
• A hierarchy of calculi
• Encoding parallel recovery
• An impossibility result
• Conclusions
A hierarchy of calculi

P ::= 0                inaction
   |  Σᵢ πᵢ.Pᵢ         guarded choice
   |  !π.P             guarded replication
   |  P|Q              parallel composition
   |  (νx)P            restriction
   |  t[P,Q]           transaction
   |  ⟨P⟩              protected block
   |  X                process variable
   |  inst⌊λX.Q⌋.P     compensation update

π ::= a(x)  input   |   ā⟨x⟩  output
Simple examples: static compensations

• Transactions can compute:        ā⟨b⟩ | t[a(x).x̄.0, Q]  →  0 | t[b̄.0, Q]
• Transactions can be killed:      t̄ | t[ā.0, Q]  →  ⟨Q⟩
• Transactions can commit suicide: t[t̄.0 | ā.0, Q]  →  ⟨Q⟩
• Protected code is protected:     t[t̄.0 | ⟨ā.0⟩, Q]  →  ⟨ā.0⟩ | ⟨Q⟩
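The kill rule can be mirrored in a small executable sketch. The AST encoding below is mine, not the paper's: killing t[P,Q] discards the body except its protected blocks and runs the compensation protected.

```python
# Killing a transaction: t̄ | t[P, Q] → extract(P) | ⟨Q⟩, where extract
# keeps only the protected blocks of the discarded body.
from dataclasses import dataclass
from typing import Any

@dataclass
class Par:                       # P | Q
    left: Any
    right: Any

@dataclass
class Protected:                 # ⟨P⟩
    body: Any

@dataclass
class Transaction:               # t[P, Q]
    name: str
    body: Any
    comp: Any

def extract_protected(p):
    """Collect the protected blocks of a killed body; drop the rest."""
    if isinstance(p, Protected):
        return [p]
    if isinstance(p, Par):
        return extract_protected(p.left) + extract_protected(p.right)
    return []

def kill(t):
    """The result of a kill on t, as a list of parallel components."""
    return extract_protected(t.body) + [Protected(t.comp)]

# t[t̄.0 | ⟨ā.0⟩, Q] → ⟨ā.0⟩ | ⟨Q⟩ :
result = kill(Transaction("t", Par("t̄.0", Protected("ā.0")), "Q"))
print(result)  # [Protected(body='ā.0'), Protected(body='Q')]
```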
Simple examples: compensation update

• Parallel update:               t[inst⌊λX.P|X⌋.ā.0, Q]  →  t[ā.0, P|Q]
• Sequential update (backward):  t[inst⌊λX.b̄.X⌋.ā.0, Q]  →  t[ā.0, b̄.Q]
• Compensation deletion:         t[inst⌊λX.0⌋.ā.0, Q]  →  t[ā.0, 0]
Race conditions

• It should never happen that an action has been performed but the corresponding compensation update has not yet been applied
• Otherwise, in case of fault, the compensation is not updated
• Compensation update should have priority w.r.t. normal actions
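The hazardous interleaving can be made concrete in a sequential toy model (names mine): if a fault lands between an action and its compensation update, recovery runs against a stale compensation.

```python
# Action performed, fault arrives, update not yet installed: the stored
# compensation does not cover the action. Giving the update priority
# (performing action and update as one indivisible step) closes the window.

def run(fault_in_window, atomic):
    performed, comp = [], []
    if atomic:
        performed.append("book"); comp.append("unbook")   # one step
        if fault_in_window:
            return performed, comp
    else:
        performed.append("book")
        if fault_in_window:               # fault lands in the race window
            return performed, comp        # 'unbook' was never installed
        comp.append("unbook")
    return performed, comp

print(run(fault_in_window=True, atomic=False))  # (['book'], [])
print(run(fault_in_window=True, atomic=True))   # (['book'], ['unbook'])
```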
A hierarchy of calculi

• General dynamic recovery
• Parallel recovery
  – All compensation updates have the form λX. Q|X
• Static recovery
  – Compensation updates are never used
• General dynamic recovery is more expressive than parallel recovery
• Parallel recovery and static recovery have the same expressive power
Map of the talk

• Comparing primitives for compensations
• A hierarchy of calculi
• Encoding parallel recovery
• An impossibility result
• Conclusions
Encoding parallel update

⟦t[P,Q]⟧ = (νr) t[⟦P⟧, ⟦Q⟧ | r̄]
⟦inst⌊λX.Q|X⌋.P⟧ = ⟦P⟧ | ⟨r.(⟦Q⟧ | r̄)⟩

(⟦·⟧ is the encoding from parallel into static recovery)

• Other constructs are mapped homomorphically to themselves
• Each transaction has an associated name r
• Compensations are stored in the body, protected and guarded by r
• An output on r is added to the static compensation and regenerated by the stored compensations
Example of the encoding
Sample execution

(νr) t[book.⟨r.(unbook | r̄)⟩ | pay.⟨r.(refund | r̄)⟩, 0 | r̄]
  --book-->   (νr) t[⟨r.(unbook | r̄)⟩ | pay.⟨r.(refund | r̄)⟩, 0 | r̄]
  --pay-->    (νr) t[⟨r.(unbook | r̄)⟩ | ⟨r.(refund | r̄)⟩, 0 | r̄]
  --t-->      (νr) (⟨r.(unbook | r̄)⟩ | ⟨r.(refund | r̄)⟩ | ⟨r̄⟩)
  --τ-->      (νr) (⟨r.(unbook | r̄)⟩ | ⟨refund | r̄⟩)
  --τ-->      (νr) (⟨unbook | r̄⟩ | ⟨refund⟩)
  --unbook--> (νr) (⟨r̄⟩ | ⟨refund⟩)
  --refund--> (νr) (⟨r̄⟩ | ⟨0⟩)
Properties of the encoding

• The encoding is defined by structural induction on the term
• The process to be encoded is weakly bisimilar to its encoding
  – For processes that do not install compensations at top level
• The encoding does not introduce divergence
Map of the talk

• Comparing primitives for compensations
• A hierarchy of calculi
• Encoding parallel recovery
• An impossibility result
• Conclusions
Conditions for compositional encoding

1. Parallel composition is mapped into parallel composition
2. Well-behaved w.r.t. substitutions
3. Transactions are implemented by some fixed context
   – With the transaction name as a parameter
4. The process to be encoded is should-testing equivalent to its encoding
   – Only for well-formed processes
   – Weaker than asking for weak bisimilarity
5. Divergence is not introduced
Are the conditions reasonable?

• These or similar conditions have been proposed in the literature [Gorla, Palamidessi]
• Should-testing equivalence is required only for well-formed processes
  – Processes that do not install compensations outside transactions
  – Otherwise those compensations could be observed
  – Yet those compensations can never be executed
• Sanity check: our previous encoding satisfies these properties
Impossibility result

• There is no compositional encoding of general dynamic recovery into static recovery
• Idea of the proof
  – With general dynamic recovery it is possible to observe the order of execution of parallel actions by looking at their compensations
  – With static or parallel recovery this is not possible

t[ā.inst⌊λX.ā′.0⌋ | b̄.inst⌊λX.b̄′.0⌋, 0]

• This process has a trace a, b, t, b′ but no trace a, b, t, a′
• This behaviour cannot be obtained using static recovery
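The proof idea can be replayed in a toy model where the installed compensation is a list (representation and names mine): with general dynamic recovery each branch replaces the compensation, so the surviving compensation reveals which action ran last; with parallel recovery both compensations accumulate and the two orders are indistinguishable.

```python
# Final compensation after executing the actions in `order`,
# under a given compensation-install rule.
def final_comp(order, install):
    comp = []
    for action in order:
        comp = install(action, comp)
    return comp

# General dynamic recovery: inst⌊λX. a′.0⌋ replaces the old compensation.
replace = lambda a, X: [a + "'"]
# Parallel recovery: only updates of the shape λX. a′ | X are allowed.
parallel = lambda a, X: [a + "'"] + X

assert final_comp(["a", "b"], replace) == ["b'"]   # trace a, b, t, b′
assert final_comp(["b", "a"], replace) == ["a'"]   # the order is observable
# With parallel recovery both orders install the same compensations:
assert sorted(final_comp(["a", "b"], parallel)) == \
       sorted(final_comp(["b", "a"], parallel))
```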
Additional results

• Asynchronous calculi
  – The impossibility result can be extended
  – One must require bisimilarity preservation instead of should-testing preservation
    » Otherwise it is difficult to observe the order of actions
• Backward recovery
  – Easily definable in a calculus with sequential composition
  – Even allowing just a prefix to be added in front of the old compensation (λX.π.X) is enough for the impossibility
Map of the talk

• Comparing primitives for compensations
• A hierarchy of calculi
• Encoding parallel recovery
• An impossibility result
• Conclusions
Summary

• A formalization of three different forms of recovery
  – Static, parallel and dynamic
• An encoding of parallel recovery into static recovery
• A separation result between those two and dynamic recovery
• What about calculi in the literature?
Exporting our results to other calculi

Calculus | Underlying language | Compens. definition | Protection operator | Encoding applicable | Impossib. applicable
---------|---------------------|---------------------|---------------------|---------------------|---------------------
dcπ      | Asynch. π           | Parallel            | Yes                 | Yes                 | Asynch.
webπ     | Asynch. π           | Static              | Implem.             | Yes                 | Asynch.
πt       | Asynch. π           | Static              | No                  | No                  | Asynch.
cjoin    | Join                | Static              | No                  | Yes*                | No
COWS     | –                   | Static              | Yes                 | Yes                 | No
SOCK     | –                   | Dynamic             | Implem.             | Yes                 | No
Jolie    | –                   | Dynamic             | Implem.             | Yes                 | No
WS-BPEL  | –                   | Static              | Implem.             | Yes                 | No
Future work

• Many questions still open
  – Nested vs flat transactions
  – What about BPEL-style recovery?
  – What about c-join and calculi with priority?
  – …
• We think that a similar approach can be used to answer them
End of talk
Application: dcπ

• dcπ is an asynchronous π-calculus with parallel recovery
• dcπ can be seen as a fragment of our calculus with parallel update of compensations
• The encoding also works in the asynchronous case, so dcπ can be mapped into its static fragment
Application: webπ and webπ∞

• webπ∞ is an asynchronous fragment of our calculus with static recovery
• It is not possible to implement general dynamic recovery on top of it
• It is possible to implement parallel recovery
• webπ has timed transactions, which add an orthogonal expressiveness dimension
Application: c-join

• C-join is a calculus with static recovery based on Join
  – It also has some features of parallel recovery, since transactions can be merged
• Join patterns are more expressive than π-calculus communication
• We conjecture that this gives the additional power required to implement general dynamic recovery
Application: Sagas, StAC and BPEL

• They use parallel recovery for parallel activities and backward recovery for sequential ones
  – More than parallel recovery, less than general dynamic recovery
  – The counterexample used in the impossibility theorem does not apply
• Sagas and StAC have no communication, so observations are also different