Access Strategy
Junxiao Shi, 2016-06-29

Problem

Definition
The access router strategy is a forwarding strategy designed for forwarding Interests from a router on the NDN Testbed to laptops directly connected to this router that can serve contents under the local site prefix.
[Figure: NDN Testbed topology with routers MEMPHIS, ARIZONA, and CAIDA; laptops /arizona/alice and /arizona/bob are attached to the ARIZONA access router.]

Scenario: the last hop on Testbed
Several laptops connect to an access router. They are one hop away, with no intermediate router.
Links are lossy: the NDN Testbed uses UDP tunnels over the public Internet, so packet losses can occur due to congestion.
FIB is mostly correct: remote prefix registration allows a laptop to register a precise prefix, although this doesn't guarantee the laptop can serve all contents under that prefix.

Problem: NCC strategy makes loss unrecoverable
NFD v0.2 recommends NCC strategy at the last hop from access router to laptops, but it doesn't work well. In particular, after an Interest is forwarded to a laptop, NCC strategy will never retry or retransmit to this laptop until InterestLifetime expires: when packet loss occurs, even if the consumer retransmits the Interest, NCC suppresses the retransmission.
Realtime applications cannot afford to wait for a regular InterestLifetime. They work around this by attempting to match InterestLifetime with RTT, which causes other problems.

Can we just "fix" NCC strategy?
No. NCC strategy is designed to exactly mimic CCNx 0.7.2 behavior. It's complex and tightly coupled, and not easily changeable.
NCC strategy also has other problems. For example, its RTT estimation uses incremental updates, which is inaccurate, especially if the "one level up" prefix doesn't have many children.
It should be replaced, not fixed.

Design

Idea
Multicast the first Interest to all nexthops. When Data comes back, remember the last working nexthop for the prefix; the granularity of this knowledge is the parent of the Data Name.
Forward subsequent Interests to the last working nexthop. If it doesn't respond, multicast again.

Flowchart (main)
New Interest arrival:
- Has measurements for the Interest Name? Is the last working nexthop still in the FIB entry, not the downstream, and not violating scope?
  - Yes to both: send to the last working nexthop and wait. If not satisfied within RTO, multicast to all nexthops except the last working nexthop.
  - No to either: multicast to all nexthops.
Satisfied: update measurements. DONE.
Consumer retransmission: if within the suppression interval, do nothing; otherwise the retransmission is forwarded.
InterestLifetime timeout: DONE.
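Below is a minimal standalone C++ sketch of this main flow. The types and helpers (FaceId, FibEntry, PrefixInfo, findMeasurements, sendInterest, scheduleRtoTimeout) are hypothetical placeholders, not the actual NFD Strategy API; the sketch only illustrates the decision logic.

    #include <chrono>
    #include <string>
    #include <vector>

    using FaceId = int;
    using Name = std::string;
    using Duration = std::chrono::milliseconds;

    struct FibEntry { std::vector<FaceId> nexthops; };

    // per-prefix knowledge kept in the Measurements table (detailed on later slides)
    struct PrefixInfo {
      FaceId lastWorkingNexthop = -1;
      Duration rto{100};                      // derived from the per-prefix RTT estimator
    };

    struct AccessStrategySketch {
      // placeholders for the real Measurements lookup and packet transmission
      PrefixInfo* findMeasurements(const Name&) { return nullptr; }
      void sendInterest(FaceId) {}
      void scheduleRtoTimeout(Duration) {}

      bool isUsable(FaceId nh, const FibEntry& fib, FaceId downstream) const {
        // "still in FIB entry, not the downstream, and not violating scope"
        // (the scope check is omitted in this sketch)
        if (nh < 0 || nh == downstream) return false;
        for (FaceId x : fib.nexthops) if (x == nh) return true;
        return false;
      }

      void multicast(const FibEntry& fib, FaceId downstream, FaceId exclude = -1) {
        for (FaceId nh : fib.nexthops)
          if (nh != downstream && nh != exclude)
            sendInterest(nh);
      }

      void onNewInterest(const Name& name, const FibEntry& fib, FaceId downstream) {
        PrefixInfo* info = findMeasurements(name);
        if (info != nullptr && isUsable(info->lastWorkingNexthop, fib, downstream)) {
          // unicast to the last working nexthop; if no Data arrives within RTO,
          // onRtoTimeout() multicasts to the remaining nexthops
          sendInterest(info->lastWorkingNexthop);
          scheduleRtoTimeout(info->rto);
        } else {
          // no usable knowledge: multicast to all nexthops in the FIB entry
          multicast(fib, downstream);
        }
      }

      void onRtoTimeout(const Name& name, const FibEntry& fib, FaceId downstream) {
        if (PrefixInfo* info = findMeasurements(name))
          multicast(fib, downstream, info->lastWorkingNexthop);
      }
    };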

Flowchart (update measurements procedure)
Data arrival: update the per-face RTT estimator; extend the measurements lifetime.
Is the Data from the last working nexthop?
- Yes: update the per-prefix RTT estimator. DONE.
- No: record the new last working nexthop and copy the state of the per-face RTT estimator into the per-prefix RTT estimator. DONE.
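A similar standalone sketch of this procedure, assuming a placeholder RttEstimator and a simplified PrefixInfo that corresponds to the StrategyInfo record on the next slide:

    #include <chrono>
    #include <map>

    using FaceId = int;
    using Duration = std::chrono::milliseconds;

    struct RttEstimator {                 // placeholder; algorithm sketched on a later slide
      void addMeasurement(Duration) {}
    };

    struct PrefixInfo {
      FaceId lastWorkingNexthop = -1;
      RttEstimator perPrefixRtt;          // tracks the last working nexthop only
    };

    std::map<FaceId, RttEstimator> perFaceRtt;   // global, one estimator per face

    // called when the Data that satisfies the Interest arrives from dataFace
    void updateMeasurements(PrefixInfo& info, FaceId dataFace, Duration rtt) {
      perFaceRtt[dataFace].addMeasurement(rtt);  // always update the per-face estimator
      // extendMeasurementsLifetime(info);       // hypothetical: reset the 8-second lifetime
      if (dataFace == info.lastWorkingNexthop) {
        info.perPrefixRtt.addMeasurement(rtt);   // same upstream: refine the per-prefix RTT
      } else {
        info.lastWorkingNexthop = dataFace;       // record the new last working nexthop
        info.perPrefixRtt = perFaceRtt[dataFace]; // copy state from the per-face estimator
      }
    }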

StrategyInfo in Measurements table
Granularity: parent of Data Name. E.g. for incoming Data /A/B/v1/s0/<implicit-digest>, measurements are stored at /A/B/v1. See reasons on the next page.
Fields:
- last working nexthop: which upstream satisfied the last Interest under this prefix. "Satisfy" means "first to respond": if face 1 has responded to an Interest and then face 2 also responds, the last working nexthop is face 1.
- per-prefix RTT estimator: an RTT estimator for the last working nexthop under this prefix. TCP-like mean-deviation algorithm, but no multiplier.
- Lifetime: 8 seconds since any incoming Data.
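A rough sketch of this record, with hypothetical names; getParentPrefix() stands in for the "parent of Data Name" rule:

    #include <chrono>
    #include <string>

    using FaceId = int;
    using TimePoint = std::chrono::steady_clock::time_point;

    struct RttEstimatorState {            // mean-deviation state, see "Per-face RTT estimator"
      std::chrono::milliseconds smoothedRtt{0};
      std::chrono::milliseconds rttVariation{0};
    };

    struct StrategyInfo {
      FaceId lastWorkingNexthop = -1;     // upstream that was first to respond under this prefix
      RttEstimatorState perPrefixRtt;     // RTT estimator for the last working nexthop
      TimePoint expiry;                   // extended to now + 8s on every incoming Data
    };

    // the entry is stored in the Measurements table at the parent of the Data Name,
    // e.g. Data Name /A/B/v1/s0 -> measurements prefix /A/B/v1
    std::string getParentPrefix(const std::string& dataName) {
      return dataName.substr(0, dataName.rfind('/'));
    }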

StrategyInfo in Measurements table – granularity
Assumption: after the first Data is returned, the consumer will request sibling Data using exact Names. This assumption is true for:
- file retrieval: /name/v1/s0, /name/v1/s1, …
- ping: /site/ping/random0, /site/ping/random1, …
Why "parent of Data Name"?
- Going down to the Data Name isn't useful: after Data /A/B/v1/s0 is returned, if there's another Interest for /A/B/v1/s0, it will be satisfied by the ContentStore (in most cases) and won't be forwarded.
- We shouldn't aggregate too much: the producer returning /A/B/v1/s0 doesn't necessarily serve the /A/C prefix, but almost always serves the /A/B/v1 prefix.
- What about "two levels up"? Not bad for file retrieval, but not suitable for ping.

StrategyInfo in Measurements table – granularity
Why not use the Interest Name? The Interest Name could be too coarse: the producer answering /A with /A/B/v1/s0 doesn't necessarily serve the /A/C prefix.
Why not follow registered prefixes (Routes)? The strategy doesn't have access to the RIB; it can only access the longest-prefix-matched FIB entry. The RIB/FIB prefix is too coarse.

StrategyInfo in Measurements table – granularity
Why not record measurements on multiple/all levels? Recording at multiple/all levels incurs additional overhead but doesn't bring much benefit: lack of measurements causes multicasting, and multicasting is limited to the nexthops in the FIB entry, which is expected to be fewer than five, and in many cases only one.

Per-face RTT estimator
Assumption: the access router strategy operates on the last hop, and the RTT of a particular laptop is mostly constant if the processing delays of all apps on that laptop are similar.
Instead of having per-prefix-per-face RTT estimators in the Measurements table (memory overhead), we keep a per-prefix RTT estimator only for the last working nexthop, and have a global per-face RTT estimator.
When the last working nexthop is added or changed, the state of the per-face RTT estimator is copied to the per-prefix RTT estimator.
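A sketch of a mean-deviation estimator of this kind is below. The smoothing gains and the reading of "no multiplier" (as dropping TCP's factor of 4 on the deviation term in the RTO) are assumptions, since the slides don't give exact constants.

    #include <chrono>
    #include <cmath>

    using Duration = std::chrono::duration<double, std::milli>;

    class RttEstimator {
    public:
      void addMeasurement(Duration rtt) {
        if (!m_hasSample) {
          m_srtt = rtt;
          m_rttvar = rtt / 2;
          m_hasSample = true;
        } else {
          // TCP-like mean-deviation update (gains 1/4 and 1/8, as in TCP)
          Duration err = rtt - m_srtt;
          m_rttvar = 0.75 * m_rttvar + 0.25 * Duration(std::fabs(err.count()));
          m_srtt = m_srtt + 0.125 * err;
        }
      }

      Duration computeRto() const {
        // "no multiplier": the deviation term is not scaled by TCP's factor of 4
        // (this interpretation of the slide is an assumption)
        return m_srtt + m_rttvar;
      }

    private:
      bool m_hasSample = false;
      Duration m_srtt{0};
      Duration m_rttvar{0};
    };

    // Per-face estimators are global (one per face). The per-prefix estimator exists
    // only for the last working nexthop; when that nexthop is added or changed, it is
    // seeded by copying the per-face estimator's state (see the update-measurements
    // sketch earlier).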

Suppression interval, i.e. how often a retransmission is forwarded
Constant: 100ms.
Task 1913 proposes exponential back-off, but it may not be the best solution.
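For illustration, with a constant 100ms interval the check could look like this (lastForwarded is hypothetical per-PIT-entry state):

    #include <chrono>

    using Clock = std::chrono::steady_clock;

    // constant suppression interval of 100 ms
    constexpr std::chrono::milliseconds SUPPRESSION_INTERVAL{100};

    // returns true if a consumer retransmission should be forwarded now
    bool shouldForwardRetransmission(Clock::time_point lastForwarded, Clock::time_point now) {
      return now - lastForwarded >= SUPPRESSION_INTERVAL;
    }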

Conceptual Simulation
There's no code yet. Run the algorithms in your mind.

One precise nexthop
Scenario: the FIB entry has a single nexthop, laptopA.
What happens: all Interests go to laptopA, because the strategy follows the FIB. If laptopA doesn't respond to an Interest, retransmissions are allowed every suppression interval.

Two precise nexthops
Scenario: the FIB entry points to laptopA and laptopB; both can serve the entire prefix (e.g. two synchronized repositories).
What happens: the first Interest goes to both laptops. Whoever responds faster gets the next Interest. If a subsequent Interest is unanswered by the "last working nexthop" (say, laptopA) after RTO, laptopB gets the Interest. If laptopB also doesn't respond, an initial retransmission is allowed after RTO + suppression interval (since the initial Interest), and subsequent retransmissions are allowed every suppression interval.

Two imprecise nexthops
Scenario: FIB entry /P points to laptopA and laptopB. laptopA can serve /P/A and /P/AA; laptopB can serve /P/<..> except /P/A and /P/AA. Data Names have at least three components, such as /P/Q/1.
What happens for Interest /P/A/<..>: the first Interest goes to both laptops. laptopA responds and becomes the "last working nexthop". Subsequent Interests go to laptopA. If laptopA doesn't respond within RTO, the Interest is sent to laptopB, which won't respond; an initial retransmission is allowed after RTO + suppression interval (since the initial Interest), and subsequent retransmissions are allowed every suppression interval.
What happens for Interest /P/B/<..>: similar to the above.

All wrong nexthops
Scenario: none of the laptops in the FIB entry respond.
What happens: all Interests are sent to all laptops, because a "last working nexthop" is never learned.

Too short Data Names
Scenario: FIB entry /P points to laptopA and laptopB. laptopA has Data /P/A; laptopB has Data /P/B; no other Data exists in the system.
What happens: Interest /P/A is sent to both laptops. laptopA responds and is remembered as the "last working nexthop" for the /P prefix. Interest /P/B is sent to laptopA. laptopA cannot respond; after RTO the Interest is sent to laptopB.
This scenario violates the assumption used in the granularity choice.

Too long Data Names
Scenario: laptopA serves /P; Interests are named /P/<..>, Data are named /P/<..>/Q/R/S. laptopB does not serve /P.
What happens: Interest /P/1 is sent to both laptops. laptopA responds with /P/1/Q/R/S; the "last working nexthop" is recorded on /P/1/Q/R. Interest /P/2 is sent to both laptops again; the knowledge on /P/1/Q/R is not effective.
This scenario violates the assumption used in the granularity choice.

Laptop with fast and slow apps
Scenario: laptopA has app /P (delay=10ms) and app /Q (delay=50ms); laptopB has app /P (delay=50ms) and app /Q (delay=10ms).
What happens: the first few Interests for the two apps learn that:
- /P's last working nexthop is laptopA with 10ms RTT
- /Q's last working nexthop is laptopB with 10ms RTT
- laptopA's RTT is 10ms; laptopB's RTT is 10ms
Apps on laptopA fail, but the laptop isn't disconnected. The next Interest for /P is sent to laptopA but unanswered, and is retried with laptopB, which answers after 50ms. /P's last working nexthop becomes laptopB with 20ms RTT. The next Interest for /P is sent to laptopB, but the RTO is inaccurate.
This scenario violates the assumption used in the global per-face RTT estimator design.