
Presentation on theme: "1 Preliminaries  Rest of the semester, focus on mobile computing  Unfortunately, running out of time, we may need a make up class  Feedback – less networking/more."— Presentation transcript:

1 Preliminaries
- Rest of the semester: focus on mobile computing
- Unfortunately, we are running out of time; we may need a make-up class
- Feedback: less networking / more computing in the next run of this class?
- Some topics I would like us to discuss in the remaining lectures:
  - Context-awareness
  - Location sensitivity
  - Power issues
  - Sensor networks (probably discussed with pervasive computing)
  - Security (?)

2 Satya's Pervasive Computing Challenges Paper
- The time is now for pervasive computing in terms of the required underlying technologies
  - Wireless devices, WLANs, etc.
- The paper drives towards pervasive computing, but stops along the way to discuss mobile computing
- It has a nice organization of the contributions of mobile computing research so far
- We will use it as our outline for the rest of the semester

3 Towards Pervasive Computing

4 System Support for Mobile Computing – Case Study (CODA)

5 Paper Highlights
- A decentralized distributed file system accessed from autonomous workstations
  - Most of these features were already present in AFS (the Andrew File System)
- An optimistic mechanism to handle inconsistent updates:
  - Coda does not prevent inconsistencies; it detects them
- Hoarding to allow disconnected operation

6 Sources
- "Disconnected Operation in the Coda File System," Kistler and Satya
- "Coda: A Highly Available File System for a Distributed Workstation Environment," Satya, Kistler, Kumar, Okasaki, Siegel, and Steere
- The Coda website (I've liberally used some of their presentation material)

7 Review of Problems in a Mobile Environment
- Variable bandwidth
- Disconnected operation
- Limited power
- What are the implications for distributed file system support (e.g., NFS)?

8 CODA
- A DFS with support for disconnected operation; originally built for Mach 2.6, but since ported to Linux/Unix
- Supports transparency and failure resilience (failures of servers or the network)
- Server replication (not discussed in this paper, but easy to understand) protects against server failure
- Disconnected operation: the ability of a node to continue working when disconnected from the servers, protecting against network failure/disconnection

9 Introduction
- AFS was a very successful DFS for a campus-sized user community
  - There was even a desire to extend it nationwide, but the WWW took over instead
- CODA extends AFS by:
  - Providing availability through replication
  - Gracefully integrating AFS with portable computers (disconnected operation)

10 Design Rationale
- We would like high availability and consistency in the face of failures and disconnections, using off-the-shelf hardware and a Unix-like environment
- Balance availability, consistency, and performance
  - Availability = many copies, which makes consistency difficult to maintain
- Design for scalability:
  - Use whole-file caching
  - Place functionality at the clients
  - Avoid system-wide rapid changes
- Automate the portable-workstation model
  - Figure out which files you will need at home, make your changes, and copy back the files you modified the next morning

11 Related Aside: Explicit Tradeoffs in Distributed Systems
- Traditional databases/filesystems: ACID
  - Atomicity, Consistency, Isolation, Durability
- Brewer's conjecture: the CAP model
  - Consistency, Availability, and Partition resilience: you can't have all three together!
  - C and A, but no P: databases cannot provide ACID if the network is partitioned
  - C and P, but no A: databases can maintain ACID, but cannot remain available if partitioned
  - A and P, but no C: if we sacrifice some consistency, we can have availability and partition resilience

12 Design Rationale (cont'd)
- Don't punish strongly connected clients
  - No write-backs
- Don't make life worse than when disconnected
- Do it in the background if you can
- When in doubt, seek user advice

13 CODA Overview
- The client cache is the key to handling disconnections

14 Two Mechanisms for High Availability
- Server replication
  - Makes it more difficult to get partitioned from all servers
  - Consistency?
- Disconnected operation
  - If no servers are available, attempt to work off the local cache
  - Consistency?

15 Sharing UNIX Semantics
- Centralized UNIX file systems (and Sprite) provide one-copy semantics
  - Every modification to every byte of a file is immediately and permanently visible to all clients
- AFS uses a laxer model (sometimes referred to as session semantics)
- Coda uses an even laxer model

16 AFS-1 Semantics
- The first version of AFS
  - Revalidated cached files on each open
  - Propagated modified files when they were closed
- If two users on two different workstations modify the same file at the same time, the user closing the file last overwrites the changes made by the other user

17 AFS-2 Semantics (I)
- AFS-1 required each client to call the server every time it opened an AFS file
  - Most of these calls were unnecessary, as user files are rarely shared
- AFS-2 introduces the callback mechanism
  - Don't call the server; it will call you!

18 AFS-2 Semantics (II)
- When a client opens an AFS file for the first time, the server promises to notify it whenever it receives a new version of the file from any other client
  - This promise is called a callback
- Making the promise relieves the server from having to answer a call from the client every time the file is opened
  - A significant reduction of server workload

19 AFS-2 Semantics (III)
- Callbacks can be lost!
  - The client will call the server every tau minutes to check whether it has received all the callbacks it should have
  - The cached copy is only guaranteed to reflect the state of the server copy up to tau minutes before the time the client last opened the file
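The callback mechanism from the last three slides can be sketched as a toy client/server pair. This is a minimal sketch for intuition only: the class names, the version counters, and the single-process "server call" are all illustrative stand-ins, not the real AFS/Coda RPC interfaces.

```python
import time

TAU = 600  # revalidation period in seconds ("tau" on the slide; value illustrative)

class ToyServer:
    """In-memory stand-in for an AFS server (illustrative only)."""
    def __init__(self):
        self.files = {}      # path -> (data, version)
        self.clients = []    # clients holding callback promises

    def store(self, path, data):
        _, v = self.files.get(path, (None, 0))
        self.files[path] = (data, v + 1)
        for c in self.clients:          # break callbacks on every update
            c.break_callback(path)

    def fetch(self, path):
        return self.files[path]

class ToyClient:
    def __init__(self, server):
        self.server = server
        server.clients.append(self)
        self.cache = {}                 # path -> (data, version)
        self.valid = set()              # paths with a live callback promise
        self.last_check = time.time()

    def open(self, path):
        self._revalidate_if_due()
        if path in self.valid:
            return self.cache[path][0]  # promise held: no server round-trip
        data, version = self.server.fetch(path)  # fetch registers a callback
        self.cache[path] = (data, version)
        self.valid.add(path)
        return data

    def break_callback(self, path):     # invoked by the server on updates
        self.valid.discard(path)

    def _revalidate_if_due(self):
        # Callbacks can be lost, so re-check cached versions every TAU seconds.
        if time.time() - self.last_check < TAU:
            return
        self.last_check = time.time()
        for path, (_, version) in self.cache.items():
            if self.server.fetch(path)[1] != version:
                self.valid.discard(path)
```

The point of the sketch is the workload shift: a cached open with a live promise costs nothing at the server, and the periodic `_revalidate_if_due` sweep is what bounds staleness to tau when a callback is lost.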

20 Coda Semantics (I)
- The client keeps track of the subset s of servers it was able to connect to the last time it tried
  - It updates s at least every tau seconds
- At open time, the client checks that it has the most recent copy of the file among all servers in s
  - This guarantee is weakened by the use of callbacks
  - The cached copy can be up to tau minutes behind the server copy (for connected servers)
- More detail in a second

21 Server Replication
- Coda provides a single location-transparent namespace to users
- The Coda name space is mapped at the granularity of a "subtree" to individual servers
- Read-one, write-all approach
- Each client has a preferred server
  - It holds all callbacks for the client
  - It answers all read requests from the client
  - The client can still check with other servers to find which one has the latest version of a file
- Servers probe each other once every few seconds to detect server failures

22 First- and Second-Class Replicas
- Server replication and disconnected operation are related: both replicate data to enhance availability
- Server replication: first-class copies
  - Servers are resource-rich, physically secure, professionally administered units; they are costly but reliable
- Disconnected operation: second-class copies
  - Clients can be disconnected, turned off, stolen, hacked, ...
  - Replicas on a client (cached there) are not permanent and require periodic validation against a first-class copy
  - A cache coherence protocol that balances the quality of a copy against overhead/performance is needed
- Disconnection occurs when a second-class replica is partitioned from all first-class replicas (the AVSG is empty)

23 Disconnected Operation

24 From Caching to Disconnected Operation
- The original aim of CODA was to provide a file system resilient to network failures
  - Supporting disconnected operation/mobility was an opportune side effect
- First problem: updates to the server are synchronous; if disconnected, this usually leads to a time-out/failure in normal distributed file systems (e.g., NFS)
  - Coda switches to disconnected mode (transparently to the user)
  - Venus logs updates locally in the Client Modification Log (CML)
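The local update log can be sketched as below. The record format, the single "later store supersedes an earlier store" optimization, and the replay interface are illustrative simplifications; the real Coda CML records full operation arguments and applies more optimizations than this.

```python
# Toy sketch of a Client Modification Log (CML) kept while disconnected.

class CML:
    def __init__(self):
        self.log = []   # ordered list of (op, path, payload) records

    def record(self, op, path, payload=None):
        if op == "store":
            # Log optimization: a later store to the same file supersedes
            # any earlier store, so the stale record can be dropped.
            self.log = [r for r in self.log
                        if not (r[0] == "store" and r[1] == path)]
        self.log.append((op, path, payload))

    def replay(self, server):
        # On reconnection, reintegrate by replaying the log at the server
        # (server is any object with an apply(op, path, payload) method).
        for op, path, payload in self.log:
            server.apply(op, path, payload)
        self.log.clear()
```

The optimization matters because a long disconnection can otherwise accumulate many redundant writes to the same file, inflating both local storage and reintegration time.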

25 Cache Coherence: Optimistic or Pessimistic?
- Pessimistic philosophy: make sure that conflicts cannot arise (what is a conflict?)
  - E.g., only one user is allowed to access a file at a time
- Optimistic philosophy: allow unconstrained sharing, then resolve possible conflicts if they arise
- An availability vs. consistency tradeoff?

26 Pessimistic Control
- Consider a policy where one user acquires a file and owns it until she releases it
  - Are conflicts possible?
- Bad side effects:
  - No one else can access the file, even to read it
  - What if the owner gets disconnected?
    - Owner disconnection: use a timeout (lease)?
  - What if no one else is using the file?

27 Optimistic Control
- Anyone can modify anything; data is reintegrated periodically
  - What if the client is disconnected?
  - Higher chance of conflicts occurring
- The authors profile file system usage statistics
  - They consider user files, project (shared) files, and system files
  - System-file sharing is higher than project sharing!
  - Less than 0.75% of mutations to the same file are carried out by different users within a single day
- A strong argument for the optimistic approach: the chance of conflicts is low!

28 (figure slide; no transcript text)

29 Summary
- Pessimistic replication control protocols guarantee the consistency of replicated data in the presence of any non-Byzantine failures
  - They typically require a quorum of replicas to allow access to the replicated data
- Optimistic replication control protocols allow access in disconnected mode
  - They tolerate temporary inconsistencies
  - They promise to detect them later

30 Venus Cache Manager

31 Coda Cache Coherence (in Venus)
- Optimistic replication
- Uses callbacks to maintain consistency
  - When a file is accessed, a local copy is made and a callback is registered with the server
  - If someone else modifies the file, you get a callback from the server to invalidate it
- Should updates be propagated instead of files being invalidated?
  - Too inefficient
  - Venus re-fetches the file when it is needed or during the periodic "hoard walk" (more about hoarding later)

32 Rapid Cache Validation
- Coda maintains cache coherence at the file level
- Volume callbacks
  - Add volume-level checks (e.g., /home/students/janedoe/cs527) before file-level checks

33 Data Reintegration
- Once the mobile client is connected again, the updates must be propagated from the cache back to the server
- If another client has modified the same file, we have a conflict
- Conflict resolution is well understood from concurrent versioning systems
  - Usually, resolution can be done automatically (each user updated a different part of the file)
  - Infrequently, human intervention is needed
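Conflict detection at reintegration time can be sketched as a version comparison: remember which version the disconnected client based each update on, and flag a conflict if the server has since moved past it. The version-number scheme here is an illustrative simplification of Coda's actual per-object store identifiers.

```python
def reintegrate(server_versions, updates):
    """Toy reintegration with optimistic conflict detection.

    server_versions: dict path -> current version number at the server
    updates: list of (path, base_version, new_data) recorded while disconnected
    Returns (applied_paths, conflicting_paths).
    """
    applied, conflicts = [], []
    for path, base_version, new_data in updates:
        if server_versions.get(path, 0) == base_version:
            # No one else wrote in the meantime: apply and bump the version.
            server_versions[path] = base_version + 1
            applied.append(path)
        else:
            # The server copy changed underneath us: detect, don't overwrite.
            conflicts.append(path)
    return applied, conflicts
```

This mirrors the slide's point that Coda detects rather than prevents conflicts: the common case applies automatically, and only the (rare) conflicting paths are set aside for resolution.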

34 Trickle Reintegration
- Propagate updates asynchronously, in the background
  - We don't want reintegration to take up all the foreground bandwidth when users connect over a slow link
- Uses a reintegration barrier to preserve log optimization
- Reintegration chunk size is based on bandwidth
- User advice

35 Second Problem: Serving Cache Misses
- While disconnected, it is impossible to bring in new files or updates from the server
- CODA deals with this problem using hoarding
  - Users keep important files in their "hoard database"
  - E.g., you hoard your calendar on a Palm, the NY Times on a Palm, web pages for disconnected access in IE5, or files on your laptop that you think will be useful

36 Venus Cache Manager
- Venus operates in three modes:
  - Hoarding: the normal mode of operation when connected
  - Emulating: on disconnection, logs client accesses in the Client Modify Log and performs log optimizations to reduce log size
  - Reintegrating: on reconnection, reconciles the CML with the servers, with application-specific conflict resolution
- Persistence (being able to restart the machine without losing updates) is supported using Recoverable Virtual Memory (RVM)
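The three Venus modes form a small state machine. The transition table below is a sketch of the slide's description; the event names, and the "disconnect during reintegration falls back to emulating" transition, are illustrative assumptions rather than Venus's exact internal events.

```python
# Toy state machine for Venus's three operating modes.
TRANSITIONS = {
    ("hoarding", "disconnect"): "emulating",
    ("emulating", "reconnect"): "reintegrating",
    ("reintegrating", "done"): "hoarding",
    ("reintegrating", "disconnect"): "emulating",  # link lost mid-replay (assumed)
}

def step(state, event):
    # Events that don't apply in the current state leave it unchanged.
    return TRANSITIONS.get((state, event), state)
```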

37 Venus
- Files in the cache are given a priority
  - Based on user input
  - Based on recent access patterns
  - E.g., a file modified during disconnection has infinite priority
- Cache replacement occurs by picking a low-priority victim
- Priorities are periodically reevaluated via "hoard walking"
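A priority-based replacement policy like the one described can be sketched as below. The blend of hoard priority and recency, and the specific weighting, are illustrative guesses; only the structure (user input + recent access, dirty files never evicted) follows the slide.

```python
import math

def priority(entry, now):
    """Toy cache priority: hoard priority plus a recency bonus."""
    if entry["dirty"]:
        return math.inf            # modified while disconnected: never evict
    recency = 1.0 / (1.0 + now - entry["last_access"])
    return entry["hoard_priority"] + 100.0 * recency   # weighting is arbitrary

def pick_victim(cache, now):
    """Evict the lowest-priority entry (cache: path -> entry dict)."""
    return min(cache, key=lambda path: priority(cache[path], now))
```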

38 Hoard Walking
- Occurs periodically (every 10 minutes, or when the user requests it)
- Refetch invalidated entries (due to callbacks)
  - Invalidations of files mark them invalid
  - Invalidations of directories mark them suspicious (they can continue to be used)
- Fetch new files of interest (e.g., when a directory is marked for hoarding and new files have been added to it)

39 Hoarding and Cache

40 Side Track
- "Intelligent File Hoarding for Mobile Computers," Carl Tait, Hui Lei, Swarup Acharya, and Henry Chang, MobiCom '95
- Hoarding: prefetching and storing objects for access when the network is disconnected or weakly connected
  - E.g., Internet Explorer offline browsing, or loading the PowerPoint slides for this presentation onto the laptop to avoid BU's spotty network connection

41 Problem
- Local storage space is limited
  - Suppose I have 160 GB worth of files on the connected machine and 60 GB on the laptop
  - How can I use my 60 GB to continue working without disruption?
- What if the mobile node has 160 GB of its own?

42 Problem
- User input is necessary
  - Users should say what they want hoarded for disconnected access
- But users do not know what files they really need
  - They can tell you that they need PowerPoint, but they can't say that they also need some .dll file required to print the presentation on a laser printer (for example)

43 Approaches to Creating Hoard Profiles
- Do nothing:
  - Run the application that you intend to use on the road; the normal caching mechanism hoards the files
  - Problem: every program run is different; just because it worked while connected does not mean it will work while disconnected

44 User-Provided Information
- Coda uses this approach: users specify the files that they want hoarded
- Most of the time, users do not even know what files they need
  - The problem of hidden files, and of critical files that are rarely used

45 Spying
- Snapshot spying:
  - The system keeps track of all files accessed between some markers; you specify messages to refine the set of files
- Semantic distance:
  - The SEER system looked at the semantic distance between files to predict related files

46 Transparent, Analytical Spying
- Automatically detect working sets for applications and data
- Provide generalized bookends to delimit periods of activity
- Present convenient bundles to the user via a GUI
- Let the users "load the briefcase"

47 Detection of the Working Set
- The working set is the set of files that are part of a program execution; this may include data files, shared libraries (DLLs), and related applications
- Log file accesses to create a program execution tree; subsequent executions of the program help refine this tree
- The problem is differentiating application files from user data: you want to learn the "application" from my executions of it, not the files that I operated on

48 Differentiating Application and Data
1. File name extensions (.exe, .bat, .ini, etc.)
2. Directory inferencing (/usr/include/file.h is related to an application; /usr/bob/foo.dat is related to data)
3. Timestamps: files modified at the same time as the application probably belong with the application
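The three heuristics above can be sketched as a simple classifier. The extension list, directory prefixes, and timestamp window are illustrative guesses, not the paper's actual rules.

```python
import os

APP_EXTS = {".exe", ".bat", ".ini", ".dll"}           # heuristic 1 (illustrative)
APP_DIRS = ("/usr/include", "/usr/lib", "/usr/bin")   # heuristic 2 (illustrative)

def classify(path, mtime=None, app_install_time=None, window=3600):
    """Label a file as 'application' or 'data' using the three heuristics."""
    ext = os.path.splitext(path)[1].lower()
    if ext in APP_EXTS:                                # 1: file name extension
        return "application"
    if path.startswith(APP_DIRS):                      # 2: directory inferencing
        return "application"
    if mtime is not None and app_install_time is not None:
        if abs(mtime - app_install_time) < window:     # 3: timestamp proximity
            return "application"
    return "data"
```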

49 Generalized Bookends
- Measure all files and applications accessed during the time intervals
- Generalize the data over multiple executions
- Presentation and selection: GUI tools

50 Implementation Issues
- The data-orphan problem
  - A local program accesses remote data
- The program-orphan problem
  - A remote program accesses only local data
- GUI tools to select files for hoarding

51 Discussion
- As disks and flash memory become cheaper, physically smaller, and larger in capacity, is hoarding still needed?

52 Performance
- Coda is slower than AFS
- Using multicast helps reduce this difference
