1
Internet Indirection Infrastructure
Modified version of Ion Stoica’s talk at ODU Nov 14, 2005
2
Motivations
Today's Internet is built around a unicast point-to-point communication abstraction: send packet "p" from host "A" to host "B".
This simple abstraction allows the Internet to be highly scalable and efficient, but it is not appropriate for applications that require other communication primitives, such as:
Multicast
Anycast
Mobility
…
3
Why?
Point-to-point communication implicitly assumes there is one sender and one receiver, and that they are placed at fixed and well-known locations. E.g., a host identified by the IP address xxx.xxx is located in Berkeley.
There is a fundamental mismatch between the point-to-point communication abstraction and these other primitives: multicast, anycast, and mobility each violate at least one of its assumptions. With mobility, end-hosts do not have fixed locations; with multicast, there is more than one receiver.
4
IP Solutions Extend IP to support new communication primitives, e.g.,
Mobile IP
IP multicast
IP anycast
Disadvantages:
Difficult to implement while maintaining the Internet's scalability (e.g., multicast)
Require community-wide consensus, which is hard to achieve in practice
During the last decade a plethora of solutions have been proposed to implement anycast, multicast, and mobility. These solutions can be classified into two categories: IP-level solutions and application-level solutions. Examples of IP-level solutions are Mobile IP, IP multicast, and IP anycast. Unfortunately, despite years of research, these solutions have yet to be deployed on a large scale; two of the reasons are the difficulty of implementing them scalably and the need for community-wide consensus.
5
Application Level Solutions
Implement the required functionality at the application level, e.g.:
Application-level multicast (e.g., Narada, Overcast, Scattercast, …)
Application-level mobility
Disadvantages:
Efficiency is hard to achieve
Redundancy: each application implements the same functionality over and over again
No synergy: each application usually implements only one service, and services are hard to combine
To get around these problems, several application-level solutions have recently been proposed. However, they have several disadvantages. First, efficiency is hard to achieve. Second, there is no synergy: proposals for application-level multicast don't address mobility, and vice versa. Usually, each new abstraction requires deploying a new overlay.
6
Internet Indirection Infrastructure (i3)
Each packet is associated with an identifier id.
To receive a packet with identifier id, receiver R maintains a trigger (id, R) in the overlay network.
[Figure: the sender sends packet (id, data) into the overlay; the trigger (id, R) inserted by receiver R redirects it, and R receives (R, data).]
7
Service Model API
sendPacket(p); insertTrigger(t); removeTrigger(t) // optional
Best-effort service model (like IP)
Triggers periodically refreshed by end-hosts
ID length: 256 bits
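As a rough illustration of this service model, the sketch below shows how an end-host might wrap the three calls. The I3Client class, its overlay handle, and the 30-second refresh interval are illustrative assumptions, not part of the published i3 interface.

```python
import os
import threading

def new_id() -> bytes:
    return os.urandom(32)               # i3 identifiers are 256 bits long

class I3Client:
    REFRESH_SECONDS = 30.0              # assumed soft-state refresh period

    def __init__(self, overlay):
        self.overlay = overlay          # handle to some i3 node we know about
        self.triggers = set()

    def insert_trigger(self, ident, addr):
        """Insert trigger (ident, addr) and keep refreshing it periodically."""
        self.triggers.add((ident, addr))
        self.overlay.insert(ident, addr)
        # triggers are soft state: re-insert them before they expire
        threading.Timer(self.REFRESH_SECONDS,
                        self.insert_trigger, args=(ident, addr)).start()

    def remove_trigger(self, ident, addr):
        self.triggers.discard((ident, addr))
        self.overlay.remove(ident, addr)

    def send_packet(self, ident, data):
        """Best-effort send, as with IP: no delivery guarantee."""
        self.overlay.send(ident, data)
```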
8
Mobility
A host just needs to update its trigger as it moves from one subnet to another.
Such an indirection layer supports a large number of services. For example, to achieve mobility an end-host only needs to update its trigger with its new address when it moves from one subnet to another.
[Figure: the sender keeps sending to id; the receiver's trigger is updated from (id, R1) to (id, R2) as it moves.]
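A minimal sketch of the mobility case, using a plain dictionary as a stand-in for the trigger store: the receiver keeps its identifier across the move and only repoints its trigger at the new address.

```python
triggers = {"id_R": "R_old_address"}      # the receiver's current trigger

def moved(ident, new_addr):
    triggers[ident] = new_addr            # only the trigger changes after the move

moved("id_R", "R_new_address")
print(triggers)                           # senders keep sending to "id_R" unchanged
```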
9
Multicast
Receivers insert triggers with the same identifier.
The application can dynamically switch between multicast and unicast.
Multicast is straightforward to achieve: the only difference from unicast is that more than one host inserts a trigger with the same identifier.
[Figure: the sender sends (id, data); triggers (id, R1) and (id, R2) cause the packet to be delivered to both R1 and R2.]
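The sketch below (illustrative only, not i3 code) shows why multicast falls out of the same matching rule: an i3 node simply forwards a packet to every trigger whose identifier matches.

```python
from collections import defaultdict

triggers = defaultdict(list)               # identifier -> list of receiver addresses

def insert_trigger(ident, addr):
    triggers[ident].append(addr)

def deliver(ident, data):
    for addr in triggers[ident]:           # one matching trigger = unicast, many = multicast
        print(f"forwarding {data!r} to {addr}")

insert_trigger("g", "R1")
insert_trigger("g", "R2")                  # a second trigger with the same id turns this into multicast
deliver("g", "hello")                      # replicated to R1 and R2
```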
10
Anycast Use longest prefix matching instead of exact matching
Prefix p: anycast group identifier.
Suffix s_i: encodes application semantics, e.g., location.
[Figure: receivers R1, R2, R3 insert triggers (p|s1, R1), (p|s2, R2), (p|s3, R3); the sender sends (p|a, data), which is delivered to the receiver whose suffix best matches a.]
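A small sketch of the matching rule, with identifiers written as bit strings purely for illustration: among the triggers sharing the anycast prefix, the one with the longest common prefix with the packet ID wins.

```python
def common_prefix_len(a, b):
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def anycast_match(packet_id, trigger_ids):
    """Pick the trigger whose id shares the longest prefix with the packet id."""
    return max(trigger_ids, key=lambda t: common_prefix_len(packet_id, t))

# prefix "1010" identifies the anycast group; suffixes encode e.g. location
trigger_ids = ["1010" + "0001", "1010" + "0110", "1010" + "1100"]
print(anycast_match("1010" + "0111", trigger_ids))   # -> '10100110' (closest suffix)
```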
11
Service Composition: Sender Initiated
Use a stack of IDs to encode the sequence of operations to be performed on the data path.
Advantages:
No need to find and configure the path
Load balancing and robustness are easy to achieve
[Figure: the sender sends ((id_T, id), data); the trigger (id_T, T) routes the packet to transcoder T, which processes the data and sends (id, data); the trigger (id, R) then delivers it to receiver R.]
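The sketch below walks through the sender-initiated case under simplifying assumptions: the i3 layer only pops the top identifier and forwards, while the transcoder host applies the transformation and re-sends the rest of the stack. The names and the toy `upper()` transformation are illustrative, not taken from i3.

```python
triggers = {"id_T": "T", "id": "R"}          # identifier -> the trigger's target

def i3_forward(id_stack, data):
    top = id_stack[0]                        # best-matching trigger for the top id
    deliver(triggers[top], id_stack[1:], data)

def deliver(host, rest_of_stack, data):
    if host == "T":                          # transcoder: transform, then push the packet back into i3
        data = data.upper()                  # stand-in for MPEG-to-JPEG transcoding
        i3_forward(rest_of_stack, data)
    else:
        print(f"{host} received {data!r}")   # ordinary receiver

i3_forward(["id_T", "id"], "movie frames")   # -> R received 'MOVIE FRAMES'
```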
12
Service Composition: Receiver Initiated
The receiver can also specify the operations to be performed on the data.
i3 supports composable services, i.e., performing on-the-fly transformations (such as transcoding) on data packets as they travel through the network. To achieve this, the packet ID is replaced by a stack of IDs, where each identifier except the last one identifies a transformation to be applied to the packet. The advantage over previously proposed solutions is that there is no need to find and configure the path; the IDs are simply inserted in the proper order. Load balancing and robustness are also easy to achieve: several servers implement the same operation, and if one fails, another transparently takes over.
[Figure: the sender sends (id, data); the receiver's trigger (id, (id_F, R)) rewrites it to ((id_F, R), data); the trigger (id_F, F) routes it to firewall F, which processes the data and forwards it to R.]
13
Quick Implementation Overview
i3 is implemented on top of Chord, but could easily use CAN, Pastry, Tapestry, etc.
Each trigger t = (id, R) is stored on the node responsible for id.
Chord recursive routing is used to find the best matching trigger for a packet p = (id, data).
14
Routing
15
Routing Table
Chord uses an m-bit circular identifier space in which 0 follows 2^m - 1.
Each identifier ID is mapped to the server with the closest identifier that follows ID on the identifier circle; this server is called the successor of ID.
Each server maintains a routing table of size m. The i-th entry in the routing table of server n contains the first server that follows n + 2^(i-1).
A server forwards a packet to the closest server (finger) in its routing table that precedes ID. It takes O(log N) hops to route a packet to the server storing the best matching trigger for the packet, where N is the number of i3 servers.
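A toy sketch of this lookup, with a handful of servers and a small identifier space chosen purely for illustration: route toward an identifier by repeatedly forwarding to the closest finger that precedes it.

```python
M = 6                                  # toy identifier space [0, 2**M)
NODES = sorted([3, 7, 20, 35, 41])     # toy set of i3/Chord servers

def successor(ident):
    """Server responsible for ident: the first server clockwise from ident."""
    for n in NODES:
        if n >= ident:
            return n
    return NODES[0]                    # wrap around the identifier circle

def fingers(n):
    """The ith finger of server n is successor(n + 2**i), i = 0..M-1."""
    return [successor((n + 2**i) % 2**M) for i in range(M)]

def between(x, a, b):
    """True if x lies on the circular interval (a, b)."""
    return (a < x < b) if a < b else (x > a or x < b)

def route(start, ident):
    """Forward toward ident via the closest preceding finger at each step."""
    n, path = start, [start]
    while successor(ident) != n:
        nxt = next((f for f in reversed(fingers(n)) if between(f, n, ident)),
                   successor(ident))   # no closer finger: jump to ident's successor
        path.append(nxt)
        n = nxt
    return path

print(route(3, 37))    # [3, 35, 41]: O(log N) hops to the server storing trigger id 37
```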
16
Routing Example
R inserts trigger t = (37, R); S sends packet p = (37, data).
An end-host needs to know only one i3 node to use i3, e.g., S knows node 3 and R knows node 35.
[Figure: Chord circle over the identifier space [0, 2^m - 1] with servers 3, 7, 20, 35, 41 responsible for the ranges [42..3], [4..7], [8..20], [21..35], [36..41]. R inserts trigger (37, R) via node 35; the trigger is stored on node 41, which is responsible for [36..41]. S sends (37, data) via node 3; the packet is routed along the circle to node 41, which matches the trigger and forwards (R, data) to R.]
17
Optimization #1: Path Length
The sender/receiver caches the i3 node that stores the trigger for a specific ID; subsequent packets are sent via this one i3 node.
[Figure: after the first lookup, the sender caches the node responsible for [36..41], which stores trigger (37, R), and sends (37, data) to it directly; that node forwards (R, data) to the receiver.]
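A hedged sketch of this optimization; the Overlay and Node stubs below stand in for the real i3 machinery and are illustrative assumptions.

```python
class Node:
    def __init__(self, name): self.name = name
    def forward(self, ident, data): print(f"{self.name} forwards ({ident}, {data!r})")

class Overlay:
    def lookup(self, ident):                 # stand-in for a full O(log N) Chord lookup
        return Node("node[36..41]")

cache, overlay = {}, Overlay()

def send(ident, data):
    node = cache.get(ident)
    if node is None:
        node = overlay.lookup(ident)         # first packet: full Chord routing
        cache[ident] = node                  # remember the responsible node
    node.forward(ident, data)                # later packets go straight to that node

send(37, "hello")   # full lookup
send(37, "again")   # served from the cache, one overlay hop
```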
18
Optimization #2: Triangular Routing
Use a well-known (public) trigger for the initial rendezvous.
Exchange a pair of well-located private triggers.
Use the private triggers to send the data traffic.
[Figure: the receiver's public trigger (37, R) is used to exchange private trigger IDs (30 for R, 2 for S); S then sends data as (30, data) through the trigger (30, R), and R replies via the trigger (2, S), avoiding triangular routing.]
19
Examples
Heterogeneous multicast
Scalable multicast
Load balancing
Proximity
20
Example 1: Heterogeneous Multicast
The sender is not aware of the transformations.
[Figure: the sender (MPEG) sends (id, data). Receiver R2 (MPEG) has inserted trigger (id, R2) and receives the MPEG stream directly. Receiver R1 (JPEG) has inserted trigger (id, (id_MPEG/JPEG, R1)); its copy of the packet is routed through the MPEG-to-JPEG transcoding server S_MPEG/JPEG via trigger (id_MPEG/JPEG, S_MPEG/JPEG) and then delivered as (R1, data).]
21
Example 2: Scalable Multicast
i3 doesn't provide direct support for scalable multicast: triggers with the same identifier are mapped onto the same i3 node.
Solution: have end-hosts build a hierarchy of triggers of bounded degree.
[Figure: the root identifier g fans out to receivers R1, R2 and to an intermediate identifier x; x in turn fans out to R3, R4, so a packet (g, data) is replicated as (x, data) down the tree.]
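A sketch of how end-hosts might build such a hierarchy (an assumption about one possible construction, not code from i3): the root identifier fans out to at most K child identifiers, which in turn fan out to receivers.

```python
import itertools

K = 2                                    # maximum trigger fan-out per identifier
_fresh = itertools.count(1)              # generator of fresh intermediate ids

def build_tree(group_id, receivers, triggers):
    """Insert triggers so that no identifier has more than K children."""
    if len(receivers) <= K:
        triggers.setdefault(group_id, []).extend(receivers)
        return
    chunk = (len(receivers) + K - 1) // K
    for i in range(K):                   # split receivers across K intermediate ids
        part = receivers[i * chunk:(i + 1) * chunk]
        if not part:
            continue
        child_id = f"{group_id}.x{next(_fresh)}"
        triggers.setdefault(group_id, []).append(child_id)
        build_tree(child_id, part, triggers)

triggers = {}
build_tree("g", ["R1", "R2", "R3", "R4"], triggers)
print(triggers)  # {'g': ['g.x1', 'g.x2'], 'g.x1': ['R1', 'R2'], 'g.x2': ['R3', 'R4']}
```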
22
Example 2: Scalable Multicast (Discussion)
Unlike IP multicast, i3:
Implements only small-scale replication, allowing the infrastructure to remain simple, robust, and scalable
Gives end-hosts control over routing, enabling them to achieve scalability and to optimize tree construction to match their needs (e.g., delay, bandwidth)
23
Example 3: Load Balancing
Servers insert triggers with IDs that have random suffixes; clients send packets with IDs that have random suffixes.
[Figure: servers S1–S4 insert triggers whose IDs share a common prefix but have random suffixes; clients A and B send packets with the same prefix and their own random suffixes, so longest-prefix matching spreads them across the servers.]
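An illustrative sketch of the random-suffix scheme; the bit lengths and server set are toy values, not from i3.

```python
import random

PREFIX = "1011"                                   # shared service identifier

def random_suffix(bits=8):
    return "".join(random.choice("01") for _ in range(bits))

# each server inserts a trigger whose id = service prefix + a random suffix
servers = {PREFIX + random_suffix(): f"S{i}" for i in range(4)}

def common_prefix_len(a, b):
    return next((i for i, (x, y) in enumerate(zip(a, b)) if x != y),
                min(len(a), len(b)))

def pick_server(packet_id):
    """Longest-prefix matching among the servers' trigger ids."""
    best = max(servers, key=lambda t: common_prefix_len(packet_id, t))
    return servers[best]

for _ in range(5):                                # clients pick their own random suffixes
    print(pick_server(PREFIX + random_suffix()))  # requests typically spread across S0..S3
```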
24
Example 4: Proximity
Suffixes of trigger and packet IDs encode the server and client locations.
[Figure: servers S1–S3 insert triggers whose suffixes encode their locations; a client's packet, whose suffix encodes its own location, is matched to the nearest server.]
25
Security: Some Attacks
[Figure: example attacks using triggers — eavesdropping (the attacker inserts a trigger (id, A) for the victim's identifier), trigger loops (a cycle of triggers id1 → id2 → id3 → …), confluence (many triggers directed at the same victim V), and dead-ends.]
26
Possible Defense Measures
End-hosts can use public triggers to choose a pair of private triggers, and then use these private triggers to exchange the actual data. To keep the private triggers secret, public-key cryptography can be used to exchange them.
Instead of inserting the trigger (id, S), the server can insert two triggers, (id, x) and (x, S), where x is an identifier known only by S.
An i3 server can easily verify the identity of a receiver S by sending a challenge to S the first time the trigger is inserted. The challenge consists of a random nonce that the receiver is expected to return; if the receiver fails to answer the challenge, the trigger is removed.
Each server can put a bound on the number of triggers that can be inserted by a particular end-host.
When a trigger that doesn't point to an IP address is inserted, the server checks that the new trigger doesn't create a loop.
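A minimal sketch of the nonce challenge described above, with the network exchange simulated by plain function calls; in real i3 the challenge and answer would travel over the network.

```python
import os

pending = {}        # (id, addr) -> nonce we expect back
triggers = set()    # triggers accepted so far

def request_insert(ident, addr, send_to_addr):
    """Challenge the claimed receiver before installing (ident, addr)."""
    nonce = os.urandom(16)
    pending[(ident, addr)] = nonce
    send_to_addr(nonce)                      # deliver the challenge to addr

def answer_challenge(ident, addr, nonce):
    if pending.get((ident, addr)) == nonce:  # receiver proved it is reachable at addr
        del pending[(ident, addr)]
        triggers.add((ident, addr))
    # a wrong or missing answer leaves the trigger uninstalled

# a cooperating receiver simply echoes the nonce it was sent
request_insert("id1", "10.0.0.7",
               lambda nonce: answer_challenge("id1", "10.0.0.7", nonce))
print(triggers)     # {('id1', '10.0.0.7')}
```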
27
Traffic Isolation
Drop triggers that are being flooded without affecting other triggers:
Protect ongoing connections from new connection requests
Protect a service from an attack on another service
[Figure: victim V runs a transaction server reachable via trigger (ID1, V) and a web server reachable via trigger (ID2, V); a legitimate client C uses ID1 while the attacker A floods ID2.]
28
Traffic Isolation
Drop triggers that are being flooded without affecting other triggers:
Protect ongoing connections from new connection requests
Protect a service from an attack on another service
[Figure: the flooded trigger (ID2, V) for the web server is dropped, so the transaction server's traffic via (ID1, V) is protected from the attack on the web server.]