Networks and Distributed Systems
CSE 490h

This presentation contains content licensed under the Creative Commons Attribution 2.5 License.
Outline
- Networking
- Remote Procedure Calls (RPC)
- Transaction Processing Systems
- Failure & Reliability
Fundamentals of Networking
Sockets: The Internet = tubes? A socket is the basic network interface. It provides a two-way "pipe" abstraction between two applications: the client creates a socket and connects to the server, which receives a socket representing the other side of the connection.
Ports Within an IP address, a port is a sub-address identifying a listening program. Ports allow multiple clients to connect to a server at once.
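To make this concrete, here is a minimal client-side sketch in Java; the host, port, and request line are illustrative placeholders, not something from the slides:

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.io.OutputStream;
    import java.net.Socket;

    public class SocketClientSketch {
        public static void main(String[] args) throws IOException {
            // Connect to a listening program at host:port; the OS picks an
            // unused local port for our end of the connection.
            try (Socket socket = new Socket("example.com", 80)) {
                OutputStream out = socket.getOutputStream();   // one direction of the "pipe"
                out.write("GET / HTTP/1.0\r\n\r\n".getBytes("US-ASCII"));
                out.flush();
                BufferedReader in = new BufferedReader(
                        new InputStreamReader(socket.getInputStream())); // the other direction
                System.out.println(in.readLine());             // e.g. "HTTP/1.0 200 OK"
            }
        }
    }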
Example: Web Server (1/3) The server creates a listener socket attached to a specific port. 80 is the agreed-upon port number for web traffic.
Example: Web Server (2/3) The client-side socket is also bound to a port, but the OS chooses a random unused port number. When the client requests a site (e.g., "www.google.com"), its OS uses a system called DNS to look up the server's IP address.
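The lookup itself is a single library call in most languages; a Java sketch:

    import java.net.InetAddress;
    import java.net.UnknownHostException;

    public class DnsLookupSketch {
        public static void main(String[] args) throws UnknownHostException {
            // Ask the OS resolver (DNS) to map the host name to an IP address.
            InetAddress addr = InetAddress.getByName("www.google.com");
            System.out.println(addr.getHostAddress()); // prints one of Google's addresses
        }
    }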
Example: Web Server (3/3) Accepting the connection gives the server a new socket dedicated to this particular client (the connection is distinguished by the client's address and port). The listener is ready for more incoming connections while we process the current connection in parallel.
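A sketch of that accept loop in Java (binding port 80 typically requires elevated privileges; the request handling is elided):

    import java.io.IOException;
    import java.net.ServerSocket;
    import java.net.Socket;

    public class WebServerSketch {
        public static void main(String[] args) throws IOException {
            ServerSocket listener = new ServerSocket(80); // listener socket on the web port
            while (true) {
                Socket client = listener.accept();        // new socket for this client
                // Handle the client in parallel; the listener is immediately
                // ready to accept further connections.
                new Thread(() -> handle(client)).start();
            }
        }

        private static void handle(Socket client) {
            try (Socket c = client) {
                // ... read the request from c and write a response back ...
            } catch (IOException e) {
                // The client went away; nothing else to do in this sketch.
            }
        }
    }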
What makes this work? Underneath the socket layer are several more protocols. The most important are TCP and IP, which are used together so often that they are spoken of as one protocol: TCP/IP. Even lower-level protocols handle how data is sent over Ethernet wires, or how bits are sent through the air using 802.11 wireless…
IP: The Internet Protocol Defines the addressing scheme for computers and encapsulates internal data in a "packet". It does not provide reliability; a packet just includes enough information for routers to decide where to send it.
TCP: Transmission Control Protocol Built on top of IP Introduces concept of “connection” Provides reliability and ordering
Why is This Necessary? The Internet is not actually tube-like "under the hood". Unlike the phone system (circuit-switched), the packet-switched Internet may carry the packets of one connection over many routes at once.
Networking Issues
- If a party to a socket disconnects, how much data did they receive? Did they crash, or did a machine in the middle?
- Can someone in the middle intercept or modify our data?
- Traffic congestion makes switch/router topology important for efficient throughput
Remote Procedure Calls (RPC)
How RPC Doesn't Work Regular client-server protocols involve sending data back and forth according to a shared state:

Client: GET index.html HTTP/1.0
Server: 200 OK, Length: 2400, (file data)
Client: GET hello.gif HTTP/1.0
Server: 200 OK, Length: 81494
…
Remote Procedure Call RPC servers will call arbitrary functions in a DLL or EXE, with arguments passed over the network and return values sent back over the network:

Client: foo.dll, bar(4, 10, "hello")
Server: "returned_string"
Client: foo.dll, baz(42)
Server: err: no such function
…
Possible Interfaces RPC can be used with two basic interfaces: synchronous and asynchronous. A synchronous RPC is a "remote function call": the client blocks and waits for the return value. An asynchronous RPC is a "remote thread spawn".
Synchronous RPC
Asynchronous RPC
Asynchronous RPC 2: Callbacks
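A sketch of how the three calling styles above might look to client code, using a hypothetical stub interface (the names and the transport underneath are assumptions, not a real RPC library):

    import java.util.concurrent.Future;
    import java.util.function.Consumer;

    public class RpcStylesSketch {
        // Hypothetical stub for a remote function "bar"; the network layer is elided.
        interface RemoteService {
            String bar(int a, int b, String s);              // synchronous: blocks for result
            Future<String> barAsync(int a, int b, String s); // asynchronous: rendezvous later
            void barAsync(int a, int b, String s,
                          Consumer<String> callback);        // asynchronous with callback
        }

        static void demo(RemoteService svc) throws Exception {
            String r1 = svc.bar(4, 10, "hello");             // wait here for the return value
            Future<String> f = svc.barAsync(4, 10, "hello"); // "remote thread spawn"
            // ... do other useful work while the call is in flight ...
            String r2 = f.get();                             // block only when we need the result
            svc.barAsync(4, 10, "hello", System.out::println); // runs when the reply arrives
        }
    }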
Wrapper Functions Writing rpc_call(foo.dll, bar, arg0, arg1, …) directly is poor form: it makes for confusing code and breaks the procedure-call abstraction. A wrapper function makes the code cleaner: bar(arg0, arg1); // just write this; it calls the "stub"
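A sketch of what such a stub might look like; RpcRuntime and its rpcCall helper are hypothetical stand-ins for the RPC library's marshalling and transport layer:

    public class FooStub {
        // Client code just writes FooStub.bar(4, 10, "hello") and stays
        // oblivious to the network underneath.
        public static String bar(int a, int b, String s) {
            // Marshal the arguments, ship them to the server, unmarshal the reply.
            return (String) RpcRuntime.rpcCall("foo.dll", "bar", a, b, s);
        }
    }

    class RpcRuntime { // placeholder so the sketch is self-contained
        static Object rpcCall(String module, String function, Object... args) {
            throw new UnsupportedOperationException("transport elided in this sketch");
        }
    }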
More Design Considerations
- Who can call RPC functions? Anybody?
- How do you handle multiple versions of a function?
- Arguments and objects need to be marshalled for the wire
- How do you handle error conditions?
- Numerous protocols implement RPC: DCOM, CORBA, Java RMI…
(break)
Transaction Processing Systems (We’re using the blue cover sheets on the TPS reports now…)
TPS: Definition A system that handles transactions coming from several sources concurrently Transactions are “events that generate and modify data stored in an information system for later retrieval” * * http://en.wikipedia.org/wiki/Transaction_Processing_System
Reliability Demands Support partial failure: the total system must support a graceful decline in application performance rather than a full halt.
Reliability Demands Data recoverability: if components fail, their workload must be picked up by still-functioning units.
Reliability Demands Individual recoverability: nodes that fail and restart must be able to rejoin the group activity without a full group restart.
Reliability Demands Consistency: concurrent operations or partial internal failures should not cause externally visible nondeterminism.
Reliability Demands Scalability: adding increased load to a system should not cause outright failure, but a graceful decline; increasing resources should support a proportional increase in load capacity.
Reliability Demands Security: the entire system should be impervious to unauthorized access, which requires considering many more attack vectors than on a single-machine system.
Ken Arnold, Jini architect: "Failure is the defining difference between distributed and local programming"
Component Failure Individual nodes simply stop
Data Failure
- Packets omitted by an overtaxed router, or dropped by a full receive buffer in the kernel
- Corrupt data retrieved from disk or net
Network Failure
- External and internal links can die
- Some failures can be routed around in a ring or mesh topology
- A star topology may cause individual nodes to appear to halt
- A tree topology may cause a "split"
- Messages may be sent multiple times, or not at all, or in corrupted form…
Timing Failure
- Temporal properties may be violated
- Lack of a "heartbeat" message may be interpreted as a component halt
- Clock skew between nodes may confuse version-aware data readers
Byzantine Failure Difficult-to-reason-about circumstances arise. Commands sent to a foreign node go unconfirmed: what can we conclude about the state of the system?
Malicious Failure A malicious (or maybe just naïve) operator injects invalid or harmful commands into the system.
Preparing for Failure Distributed systems must be robust to these failure conditions, but there are lots of pitfalls…
The Eight Design Fallacies
1. The network is reliable.
2. Latency is zero.
3. Bandwidth is infinite.
4. The network is secure.
5. Topology doesn't change.
6. There is one administrator.
7. Transport cost is zero.
8. The network is homogeneous.
-- Peter Deutsch and James Gosling, Sun Microsystems
Dealing With Component Failure
- Use heartbeats to monitor component availability (a minimal monitor is sketched below)
- A "buddy" or "parent" node is aware of the desired computation and can restart it elsewhere if needed
- Individual storage nodes should not be the sole owners of data
- Pitfall: how do you keep replicas consistent?
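A minimal heartbeat monitor sketch in Java; the node IDs, timeout, and restart policy are assumptions for illustration:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class HeartbeatMonitorSketch {
        private static final long TIMEOUT_MS = 10_000; // assumed failure threshold
        private final Map<String, Long> lastSeen = new ConcurrentHashMap<>();

        // Called each time a heartbeat message arrives from a node.
        public void onHeartbeat(String nodeId) {
            lastSeen.put(nodeId, System.currentTimeMillis());
        }

        // Run periodically by the "buddy"/"parent" node: anything that has
        // missed its heartbeats is presumed dead and its work restarted elsewhere.
        public void checkForFailures() {
            long now = System.currentTimeMillis();
            lastSeen.forEach((node, seen) -> {
                if (now - seen > TIMEOUT_MS) {
                    System.out.println(node + " presumed failed; reschedule its work");
                }
            });
        }
    }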
Dealing With Data Failure
- Data should be check-summed and verified at several points (see the sketch below)
- Never trust another machine to do your data validation!
- Sequence identifiers can be used to ensure commands and packets are not lost
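For example, a receiver can verify each frame itself, as in this Java sketch (the framing scheme is an assumption; CRC32 stands in for whatever checksum the system uses):

    import java.util.zip.CRC32;

    public class FrameCheckSketch {
        // The sequence number catches lost or reordered frames;
        // the checksum catches corruption in transit or on disk.
        public static boolean verify(long expectedSeq, long seq,
                                     byte[] payload, long claimedCrc) {
            CRC32 crc = new CRC32();
            crc.update(payload);
            return seq == expectedSeq && crc.getValue() == claimedCrc;
        }
    }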
Dealing With Network Failure
- Have a well-defined split policy
- Networks should routinely self-discover their topology
- Well-defined arbitration/leader-election protocols determine the authoritative components
- Inactive components should gracefully clean up and wait for network rejoin
Dealing With Other Failures
- Individual application-specific problems can be difficult to envision
- Make as few assumptions about foreign machines as possible
- Design for security at each step
Key Features of TPS: ACID "ACID" is the acronym for the features a TPS must support:
- Atomicity: a set of changes must all succeed or all fail
- Consistency: changes to data must leave the data in a valid state when the full change set is applied
- Isolation: the effects of a transaction must not be visible until the entire transaction is complete
- Durability: after a transaction has been committed successfully, the state change must be permanent
Atomicity & Durability What happens if we write half of a transaction to disk and the power goes out?
Logging: The Undo Buffer
1. The database writes to the log the current values of all cells it is going to overwrite
2. The database overwrites the cells with the new values
3. The database marks the log entry as committed
If the database crashes during step 2, we use the log to roll the tables back to their prior state (see the sketch below).
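An in-memory stand-in for the undo buffer, sketched in Java; a real database would force the log to durable storage before overwriting anything:

    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.HashMap;
    import java.util.Map;

    public class UndoLogSketch {
        private final Map<String, String> table = new HashMap<>();
        private final Deque<String[]> undoLog = new ArrayDeque<>(); // {key, old value}
        private boolean committed = false;

        public void write(String key, String newValue) {
            undoLog.push(new String[] { key, table.get(key) }); // step 1: log the old value
            table.put(key, newValue);                           // step 2: overwrite the cell
        }

        public void commit() {                                  // step 3: mark as committed
            committed = true;
            undoLog.clear();
        }

        // After a crash between steps 2 and 3, roll back to the prior state.
        public void recover() {
            if (committed) return;
            while (!undoLog.isEmpty()) {
                String[] entry = undoLog.pop();
                if (entry[1] == null) table.remove(entry[0]);
                else table.put(entry[0], entry[1]);
            }
        }
    }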
Consistency: Data Types Data entered in databases have rigorous data types associated with them, and explicit ranges. This does not protect against all errors (a date in the past is still a valid date, etc.), but it eliminates tedious programmer concerns.
Consistency: Foreign Keys Database designers declare that fields are indices into the keys of another table. The database ensures that the target key exists before allowing a value in the source field.
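A sketch of that behavior through JDBC; the in-memory H2 connection URL is an assumption (any database with referential integrity behaves the same way):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class ForeignKeySketch {
        public static void main(String[] args) throws SQLException {
            // Assumes the embedded H2 driver is on the classpath.
            try (Connection c = DriverManager.getConnection("jdbc:h2:mem:demo");
                 Statement s = c.createStatement()) {
                s.execute("CREATE TABLE dept (id INT PRIMARY KEY)");
                s.execute("CREATE TABLE emp (id INT PRIMARY KEY, "
                        + "dept_id INT REFERENCES dept(id))");
                s.execute("INSERT INTO dept VALUES (1)");
                s.execute("INSERT INTO emp VALUES (100, 1)");      // ok: dept 1 exists
                try {
                    s.execute("INSERT INTO emp VALUES (101, 99)"); // no dept 99
                } catch (SQLException e) {
                    System.out.println("rejected: " + e.getMessage());
                }
            }
        }
    }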
Isolation Using mutual-exclusion locks, we can prevent other processes from reading data we are in the process of writing. When a database is prepared to commit a set of changes, it locks any records it is going to update before making the changes.
Faulty Locking Locking alone does not ensure isolation! If changes to table A become visible before changes to table B, the transaction was not isolated.
Two-Phase Locking After a transaction has released any locks, it may not acquire any new locks. Effect: the lock set owned by a transaction has a "growing" phase and a "shrinking" phase.
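A minimal sketch of enforcing the two phases in Java, assuming simple mutual-exclusion locks rather than a real lock manager:

    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.concurrent.locks.ReentrantLock;

    public class TwoPhaseLockingSketch {
        private final Deque<ReentrantLock> held = new ArrayDeque<>();
        private boolean shrinking = false; // set once the first lock is released

        // Growing phase: take locks as records are touched.
        public void acquire(ReentrantLock lock) {
            if (shrinking) {
                throw new IllegalStateException("2PL violation: acquire after release");
            }
            lock.lock();
            held.push(lock);
        }

        // Shrinking phase: release everything; no new acquisitions allowed.
        public void releaseAll() {
            shrinking = true;
            while (!held.isEmpty()) {
                held.pop().unlock();
            }
        }
    }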
Relationship to Distributed Computing At the heart of a TPS is usually a large database server. Several distributed clients may connect to this server at any point in time. The database may be spread across multiple servers, but it must still maintain ACID.
Conclusions We've seen three layers that make up a distributed system: networking, remote procedure calls, and transaction processing. Designing a large distributed system involves engineering tradeoffs at each of these levels. Appreciating the subtle concerns at each level requires diving past the abstractions, but the abstractions are still useful in general.