CS162 Operating Systems and Systems Programming Lecture 25 P2P Systems and Review November 28, 2012 Ion Stoica



Ion Stoica CS162 ©UCB Fall 2012

P2P Traffic
In 2004, some Internet Service Providers (ISPs) claimed that more than 50% of their traffic was p2p.

P2P Traffic
Today, p2p is around 18-20% of traffic (North America). A big chunk of traffic now is video entertainment (e.g., Netflix, iTunes).

Peer-to-Peer Systems
What problem does it try to solve?
–Provide highly scalable, cost-effective (i.e., free!) services, e.g.:
»Content distribution (e.g., BitTorrent)
»Internet telephony (e.g., Skype)
»Video streaming (e.g., Octoshape)
»Computation
Key idea: leverage the "free" resources of the users of the service, e.g.:
–Network bandwidth
–Storage
–Computation

How Did it Start?
A killer application: Napster (1999)
–Free music over the Internet
Use (home) user machines to store and distribute songs

Model
Each user stores a subset of files
Each user can access (download) files from all users in the system
(Figure: six peers, each storing files A-F)

Main Challenge
Find a "good" node storing a specified file
By "good" we mean:
–Has correct content
–Can get content fast
–…
(Figure: a peer asking which node stores file E)

Other Challenges
Scale: up to hundreds of thousands or millions of machines
Dynamicity: machines can come and go at any time
Heterogeneity: nodes with widely different resources and connectivity

Napster
Implements a centralized lookup/directory service that maps files (songs) to the machines currently in the system
How to find a file (song)?
–Query the lookup service, which returns a machine that stores the required file
»Ideally this is the closest/least-loaded machine
–Download (ftp/http) the file
Advantages:
–Simplicity; easy to implement sophisticated search engines on top of a centralized lookup service
Disadvantages:
–Robustness, scalability (?)

Napster: Example
1) A client (initiator) contacts the directory service to get file "C"
2) The directory service returns a (possibly) close-by and lightly loaded peer (m5) storing "C"
3) The client contacts m5 directly to get file "C"
(Figure: peers m1-m9; the directory maps A: m3; B: m1, m7; C: m5, m8; D: m8)
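The centralized lookup described above can be sketched in a few lines. This is a minimal illustration, not Napster's actual protocol; the class and method names (Directory, register, lookup) are assumptions for the sketch.

```python
# Minimal sketch of a Napster-style centralized directory (hypothetical API).
# The directory maps each file name to the set of peers currently storing it;
# downloads then happen directly between peers.

class Directory:
    def __init__(self):
        self.index = {}                      # file name -> set of peer ids

    def register(self, peer, files):
        """A peer joining the system registers the files it stores."""
        for f in files:
            self.index.setdefault(f, set()).add(peer)

    def unregister(self, peer):
        """A departing peer is removed from every file's peer set."""
        for peers in self.index.values():
            peers.discard(peer)

    def lookup(self, f):
        """Return some peer storing file f (ideally closest/least loaded)."""
        peers = self.index.get(f, set())
        return min(peers) if peers else None  # deterministic pick for the sketch

d = Directory()
d.register("m3", ["A"])
d.register("m5", ["C"])
d.register("m8", ["C", "D"])
```

The sketch also shows the robustness problem: once the single Directory object goes away, no file can be located.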

The Rise and Fall of Napster
Founded by Shawn Fanning, John Fanning, and Sean Parker
Operated between June 1999 and July 2001
–More than 26 million users (February 2001)
Several high-profile songs were leaked before being released:
–Metallica's "I Disappear" demo song
–Madonna's "Music" single
But it also helped make some bands successful (e.g., Radiohead, Dispatch)
(Reemerged as a music store in 2008)

The Aftermath
"Recording Industry Association of America (RIAA) Sues Music Startup Napster for $20 Billion" – December 1999
"Napster ordered to remove copyrighted material" – March 2001
Main legal argument:
–Napster owns the lookup service, so it is directly responsible for disseminating copyrighted material

Gnutella (2000)
What problem does it try to solve?
–Get around the legal vulnerabilities by getting rid of the centralized directory service
Main idea: flood the request to peers in the system to find the file

Gnutella (2000)
How does request flooding work?
–Send the request to all neighbors
–Neighbors recursively send the request to their neighbors
–Eventually a machine that has the file receives the request, and it sends back the answer
Advantages:
–Totally decentralized, highly robust
Disadvantages:
–Not scalable; the entire network can be swamped with requests (to alleviate this problem, each request has a TTL)
»TTL (Time to Live): the request is dropped when its TTL reaches 0

Gnutella: Time To Live (TTL)
When the client (initiator) sends a request, it associates a TTL with the request
When a node forwards the request, it decrements the TTL
When the TTL reaches 0, the request is no longer forwarded
Typically, Gnutella uses TTL = 7
Example with TTL = 3: the TTL drops 3 → 2 → 1 → 0 along the path from the initiator, and forwarding stops at TTL = 0
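The TTL-limited flooding above can be sketched as a breadth-first traversal that stops when the TTL runs out. The topology, file placement, and function name are assumptions for the sketch, not part of the Gnutella protocol.

```python
# Sketch of Gnutella-style flooding with a TTL (assumed topology and names).
# A request is forwarded to all neighbors; each hop decrements the TTL and
# forwarding stops when the TTL reaches 0.

def flood(graph, start, wanted, files, ttl):
    """Return the set of reached nodes that store the wanted file."""
    hits, visited = set(), {start}
    frontier = [start]
    while frontier and ttl >= 0:
        nxt = []
        for node in frontier:
            if wanted in files.get(node, ()):
                hits.add(node)               # this node would reply
            if ttl > 0:
                for nb in graph.get(node, ()):
                    if nb not in visited:    # don't re-flood a visited node
                        visited.add(nb)
                        nxt.append(nb)
        frontier, ttl = nxt, ttl - 1         # next hop, decremented TTL
    return hits

# Line topology a - b - c - d; only d stores file "C"
graph = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
files = {"d": ["C"]}
```

With TTL = 3 the request reaches d (three hops from a); with TTL = 2 it is dropped at c, illustrating why the TTL bounds the swamping but can also hide content that is just out of reach.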

Gnutella: Example
Assume a client (initiator) asks for file "C"
Assume TTL = 2
(Figure: the initiator's neighborhood, with files A-D stored at various peers)

Gnutella: Example
The initiator sends the request to its neighbor(s)…
…which recursively forward the request to their neighbors
At the 3rd hop the request is dropped

Gnutella: Example
If a node has the requested file, it sends a reply back
–along the reverse path of the request, or
–directly to the initiator

Gnutella: Example
The initiator requests file "C" from node "m"
–The initiator may pick one of several machines if it receives multiple replies

Two-Level Hierarchy
What problem does it try to solve?
–Inefficient search
Main idea: organize the p2p system in a two-level hierarchy
–Flooding happens only at the top level

Two-Level Hierarchy
Used by KaZaa and subsequent versions of Gnutella
Leaf nodes are connected to a small number of ultrapeers (supernodes)
(Figure: leaf nodes m2-m17 attached to ultrapeer nodes)

Two-Level Hierarchy
Each ultrapeer builds a directory for the content stored at its leaf peers
(Figure: each ultrapeer's directory, e.g., B: m2, m3; D: m4; C: m10, m11)

Gnutella: Example
Query: a leaf sends the query to its ultrapeer
If the ultrapeer has the requested content in its directory, the ultrapeer replies immediately
(Figure: m2 asks its ultrapeer for "B"; the directory already lists B: m2, m3)

Gnutella: Example
Query: a leaf sends the query to its ultrapeer
If the ultrapeer doesn't have the content in its directory, the ultrapeer floods the other ultrapeers
(Figure: the query for "A" is flooded among ultrapeers until the directory entry A: m15 is found)

Example: Oct 2003 Crawl on Gnutella
(Figure: crawl snapshot showing ultrapeer nodes and leaf nodes)

Recall: Distributed Hash Tables (DHTs)
Distribute (partition) a hash table data structure across a large number of servers
–Also called a key-value store
Two operations:
–put(key, data); // insert "data" identified by "key"
–data = get(key); // get the data associated with "key"

Recall: DHTs (cont'd)
Project 4: puts and gets are serialized through a master
–The master knows all nodes (slaves) in the system
–The master maintains the mapping between keys and nodes
–Simple, but doesn't scale for large, dynamic p2p systems
Next: an efficient decentralized lookup protocol
–Many proposals: CAN, Chord, Pastry, Tapestry, Kademlia, …
–Used in practice, e.g., eDonkey (based on Kademlia)

Recall: DHTs (cont'd)
Lookup service: given a key (ID), map it to node n
  n = lookup(key);
Can invoke put() and get() at any node m
  m.put(key, data) {
    n = lookup(key);        // get node "n" mapping "key"
    n.store(key, data);     // store data at node "n"
  }
  data = m.get(key) {
    n = lookup(key);        // get node "n" storing data associated with "key"
    return n.retrieve(key); // get data stored at "n" associated with "key"
  }
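The pseudocode above can be made concrete with a tiny in-process model. This is a sketch under assumed names (Node, DHT, store_/retrieve); a real DHT would route lookup() over the network rather than scan a sorted list.

```python
# Minimal sketch of put()/get() routed through a lookup service, matching the
# pseudocode above. lookup(key) returns the node responsible for the key; data
# is then stored at / retrieved from that node directly.

class Node:
    def __init__(self, nid):
        self.nid, self.data = nid, {}
    def store(self, key, value):
        self.data[key] = value
    def retrieve(self, key):
        return self.data.get(key)

class DHT:
    def __init__(self, ids, m=6):
        self.m = m
        self.nodes = {i: Node(i) for i in ids}
    def lookup(self, key):
        """Consistent hashing: first node id >= key, wrapping around the ring."""
        key %= 2 ** self.m
        ids = sorted(self.nodes)
        for i in ids:
            if i >= key:
                return self.nodes[i]
        return self.nodes[ids[0]]            # wrap around
    def put(self, key, value):
        self.lookup(key).store(key, value)
    def get(self, key):
        return self.lookup(key).retrieve(key)

dht = DHT([4, 8, 15, 20, 32, 35, 44, 58])
dht.put(37, "hello")                         # key 37 lands at node 44
```

Because any node can compute lookup(), put() and get() can be invoked anywhere in the system, which is exactly what makes the design decentralized.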

Chord Lookup Service
Associate to each node and item a unique key in a one-dimensional space 0..2^m-1
–Partition this space across N machines
–Each key is mapped to the node with the smallest ID greater than or equal to the key (consistent hashing)
Design approach: decouple correctness from efficiency
Properties:
–Routing table size (# of other nodes a node needs to know about) is O(log N), where N is the number of nodes
–Guarantees that a file is found in O(log N) steps

Identifier to Node Mapping Example (Consistent Hashing)
m = 6; ID range is 0..63
Node 8 maps keys [5, 8]
Node 15 maps [9, 15]
Node 20 maps [16, 20]
…
Node 4 maps [59, 4] (wrapping around the ring)
Each node maintains a pointer to its successor
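The mapping above can be checked with a small successor function. The full node set {4, 8, 15, 20, 32, 35, 44, 58} is an assumption pieced together from the lecture figures; only some of these ids appear explicitly in the text.

```python
# Sketch of the consistent-hashing mapping from the example: m = 6 (ids 0..63).
# Each key maps to the node with the smallest id >= the key, wrapping at 2^m.

def successor(key, nodes, m=6):
    key %= 2 ** m
    for n in sorted(nodes):
        if n >= key:
            return n
    return min(nodes)                        # wrap around the ring

nodes = [4, 8, 15, 20, 32, 35, 44, 58]
```

This reproduces the slide's ranges: keys 5-8 land on node 8, 9-15 on node 15, 16-20 on node 20, and 59-63 wrap around to node 4.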

Lookup
Each node maintains a pointer to its successor
Route lookup(key) to the node responsible for the key using successor pointers
E.g., node 4 issues lookup(37); the request is forwarded along successor pointers until it reaches node 44, which is responsible for key 37

Stabilization Procedure
Periodic operation performed by each node n to maintain its successor when new nodes join the system

n.stabilize()
  x = succ.pred;
  if (x ∈ (n, succ))
    succ = x;        // if x is a better successor, update
  succ.notify(n);    // n tells its successor about itself

n.notify(n')
  if (pred = nil or n' ∈ (pred, n))
    pred = n';       // if n' is a better predecessor, update

Joining Operation
 Node with key = 50 joins the ring
 Node 50 needs to know at least one node already in the system
-Assume the known node is 15
(Figure: ring state: node 58 has succ=4, pred=44; node 50 has succ=nil, pred=nil; node 44 has succ=58, pred=35)

Joining Operation
 n=50 sends join(50) to node 15
 n=44 returns node 58
 n=50 updates its successor to 58

Joining Operation
 n=50 executes stabilize()
 n's successor (58) returns x = 44

Joining Operation
 n=50 executes stabilize()
 x = 44
 succ = 58
(44 ∉ (50, 58), so the successor is not updated)

Joining Operation
 n=50 executes stabilize()
 x = 44
 succ = 58
 n=50 sends notify(50) to its successor (58)

Joining Operation
 n=58 processes notify(50)
 pred = 44
 n' = 50

Joining Operation
 n=58 processes notify(50)
 pred = 44
 n' = 50
 set pred = 50 (since 50 ∈ (44, 58))

Joining Operation
 n=44 runs stabilize()
 n's successor (58) returns x = 50

Joining Operation
 n=44 runs stabilize()
 x = 50
 succ = 58

Joining Operation
 n=44 runs stabilize()
 x = 50
 succ = 58
 n=44 sets succ=50 (since 50 ∈ (44, 58))

Joining Operation
 n=44 runs stabilize()
 n=44 sends notify(44) to its successor (now 50)

Joining Operation
 n=50 processes notify(44)
 pred = nil

Joining Operation
 n=50 processes notify(44)
 pred = nil
 n=50 sets pred=44

Joining Operation (cont'd)
 This completes the joining operation!
(Final state: node 44 has succ=50; node 50 has succ=58, pred=44; node 58 has pred=50)
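The whole join walkthrough above can be replayed in code. This is a sketch of the stabilize()/notify() rules only (no network, no failures); the class and helper names are assumptions, and in_interval implements the ring interval test ∈ (a, b) with wrap-around.

```python
# Sketch of the join/stabilize/notify sequence from the example (ids 44, 50, 58).

def in_interval(x, a, b, m=6):
    """True if x lies in the open interval (a, b) on the ring of size 2^m."""
    a, b, x = a % 2**m, b % 2**m, x % 2**m
    return (a < x < b) if a < b else (x > a or x < b)

class ChordNode:
    def __init__(self, nid):
        self.nid, self.succ, self.pred = nid, self, None
    def stabilize(self):
        x = self.succ.pred
        if x is not None and in_interval(x.nid, self.nid, self.succ.nid):
            self.succ = x                    # x is a better successor; update
        self.succ.notify(self)               # tell the successor about ourselves
    def notify(self, n2):
        if self.pred is None or in_interval(n2.nid, self.pred.nid, self.nid):
            self.pred = n2                   # n2 is a better predecessor; update

# Ring fragment from the example: 44 -> 58 (with 58.pred = 44); node 50 joins.
n44, n50, n58 = ChordNode(44), ChordNode(50), ChordNode(58)
n44.succ, n58.pred = n58, n44
n50.succ = n58                               # join(50) returned 58 as 50's successor
n50.stabilize()                              # 58 learns pred = 50
n44.stabilize()                              # 44 learns succ = 50; 50 learns pred = 44
```

After the two stabilize() rounds the pointers match the final state of the walkthrough: 44 → 50 → 58 on the successor side, and the predecessors point back accordingly.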

Achieving Efficiency: Finger Tables
Say m = 7
The i-th entry ft[i] at the peer with id n is the first peer with id >= (n + 2^i) mod 2^m
(Figure: example finger table at a node, with one entry worked out as (n + 2^i) mod 2^7 = 16)
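The finger-table rule above can be computed directly. This sketch reuses the earlier ring example with m = 6 (rather than the slide's m = 7 figure), so the node ids are assumptions carried over from those figures.

```python
# Sketch of finger-table construction: ft[i] = first node with id >=
# (n + 2^i) mod 2^m, for i = 0..m-1.

def finger_table(n, nodes, m=6):
    def succ(key):
        key %= 2 ** m
        cand = [x for x in sorted(nodes) if x >= key]
        return cand[0] if cand else min(nodes)   # wrap around the ring
    return [succ(n + 2 ** i) for i in range(m)]

nodes = [4, 8, 15, 20, 32, 35, 44, 58]
```

For node 4 the targets are 5, 6, 8, 12, 20, 36, giving fingers 8, 8, 8, 15, 20, 44: the entries roughly double the distance covered, which is what makes O(log N) lookups possible.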

Details
Lookup complexity is O(log N)
–Every hop, the distance to the target is at least halved
To improve robustness, each node maintains its k (k > 1) immediate successors instead of only one successor
–The successor S of a node N can send its k-1 successors to N during N's stabilize() procedure

Announcements
Final code due: Thursday, Dec 6, 11:59pm
Final design document and group evaluation due: Friday, Dec 7, 11:59pm
Final exam: Thursday, Dec 13, 8-11am
–Closed book
–Double-sided cheat sheet
–Comprehensive, but greater focus on material since the midterm (30% / 70%)
Final Review: Wednesday, Dec 5, 6-9pm, 60 Evans Hall

Break

P2P Summary
The key challenge of building wide-area P2P systems is a scalable and robust directory service
Solutions:
–Napster: centralized location service
–Gnutella: broadcast-based decentralized location service
–CAN, Chord, Tapestry, Pastry: intelligent-routing decentralized solutions
»Guarantee correctness

CS162: Summary
OS functions:
–Manage system resources
–Provide services: storage, networking, …
–Provide a VM abstraction to processes/users: give each process/user the illusion that it is using a dedicated machine
Challenges:
–Virtualize system resources
»Virtual Memory (VM): address translation, demand paging
»CPU scheduling
–Arbitrate access to resources and data
»Concurrency control, synchronization
»Deadlock prevention, detection

Key Concept: Synchronization
Allow multiple processes to share data
Why is it challenging?
–Want high utilization: need fine-grained sharing
–Avoid non-determinism
Many primitives/mechanisms:
–Locks, semaphores, monitors (condition variables)
Many examples:
–Producer-consumer (bounded buffer, flow control)
–Reader/Writer problem
–Transactions
Most likely the concept you'll use in your job
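The producer-consumer example listed above can be sketched with a lock and two condition variables. This is a minimal illustration of the monitor pattern, not code from the course projects; the class name and buffer size are assumptions.

```python
# A minimal bounded-buffer (producer-consumer) sketch using a lock and
# condition variables, the monitor-style primitives discussed above.
import threading

class BoundedBuffer:
    def __init__(self, capacity):
        self.buf, self.capacity = [], capacity
        self.lock = threading.Lock()
        self.not_full = threading.Condition(self.lock)
        self.not_empty = threading.Condition(self.lock)

    def put(self, item):
        with self.lock:
            while len(self.buf) == self.capacity:
                self.not_full.wait()         # block until there is space
            self.buf.append(item)
            self.not_empty.notify()          # wake one waiting consumer

    def get(self):
        with self.lock:
            while not self.buf:
                self.not_empty.wait()        # block until there is an item
            item = self.buf.pop(0)           # FIFO order
            self.not_full.notify()           # wake one waiting producer
            return item

bb = BoundedBuffer(2)
results = []
t = threading.Thread(target=lambda: [results.append(bb.get()) for _ in range(3)])
t.start()
for i in range(3):
    bb.put(i)                                # producer; blocks when buffer full
t.join()
```

Note the `while` (not `if`) around each wait: the condition is re-checked after every wakeup, which is the standard guard against spurious wakeups and lost updates.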

The OS is Evolving
The vast majority of apps are distributed today
–E.g., mail, Facebook/Twitter, Skype, Google docs, …
More and more OSes integrate remote services
–E.g., iOS (iCloud), Chrome OS, Windows 8
One example in this class (project 4): a reliable and consistent key-value store
–Gives you a taste of the challenges of building a distributed system
–Why hard?
»Nodes can fail: may lose data, render the service unavailable
»The network can get congested or partitioned: slow/unavailable service
»Scale: a p2p network can consist of millions of nodes

Conclusion
The OS inherently covers many topics
–More and more services migrate into the OS (e.g., networking, search)
If you want to focus on some of these topics:
–Database class (CS 186)
–Networking class (EE 122)
–Security class (CS 161)
–Software engineering class (CS 169)
If you want to focus on OS:
–New upper-level OS class, CS 194 (John Kubiatowicz), Spring 2013
–Undergraduate research projects in the AMP Lab
»Akaros and Mesos projects