SCPS-TP, TCP and Rate-Based Protocol Evaluation Cindy Tran, Fran Lawas-Grodek, Bob Dimond and Will Ivancic wivancic@grc.nasa.gov 216-433-3494
Presentation Outline Goals TCP Over Wireless Links Testbed Layout Test Philosophy SCPS-TP / TCP / Rate-Based Test Results Conclusions
Space-Based Protocol Testing Goal Provide an objective, scientific analysis of currently available and proposed protocols for space-based networks. Where do they work? Where do they fall apart? How can they be improved or fixed? What is the state of their maturity? Determine which protocol is appropriate for a given scenario. Remove the marketing hype from the space-based protocol discussions.
TCP over Satellite/Wireless Links TCP slow start takes a long time to reach equilibrium: log2(bandwidth × delay) round-trip times (RTTs); poor performance over long fat networks, particularly for short flows; on retransmission timeout, TCP enters slow start again. Poor performance over lossy, high-capacity links: TCP infers congestion from all packet drops, even when the loss is caused by packet corruption from noise, so TCP congestion avoidance throttles the source unnecessarily. TCP sends a burst of packets when the window opens, which can cause congestion drops in intermediate routers.
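As a concrete illustration of the slow-start cost quoted above, the sketch below (not from the presentation) estimates how many round trips slow start needs to open the window to the bandwidth-delay product at the testbed's link rate and delays. The doubling-per-RTT model, the 100 Mbps link rate, and the 1024-byte segment size are assumptions taken from the test setup.

```python
# Back-of-the-envelope estimate of slow-start duration on a long-fat pipe.
# Assumes the congestion window doubles every RTT until it reaches the
# bandwidth-delay product; real stacks differ in detail.

import math

def slow_start_rtts(link_bps, rtt_s, mss_bytes=1024, init_window_segments=1):
    """Approximate RTTs for slow start to reach the bandwidth-delay product."""
    bdp_segments = (link_bps * rtt_s) / (mss_bytes * 8)
    return math.ceil(math.log2(max(bdp_segments / init_window_segments, 1)))

for rtt_ms in (10, 250, 500):                     # delays used in the tests
    rtts = slow_start_rtts(100e6, rtt_ms / 1000.0)  # 100BaseT link, as tested
    print(f"RTT {rtt_ms:3d} ms: ~{rtts} RTTs "
          f"(~{rtts * rtt_ms / 1000.0:.1f} s) to open the window")
```

At 500 msec the window takes on the order of seconds to open, which is why short flows and restarts after timeout hurt so much on long-delay links.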
Reliable Transport Protocol Testbed [figures: emulated topology; testbed layout]
SCPS-TP / TCP Testing Philosophy Tune and baseline the protocol on an error-free link for each bandwidth-delay product. Both SCPS-TP and TCP were tuned for best performance over the given delay. Record all measurements, not just optimal runs! Minimum of 30 runs for congestion-friendly protocols and 20 runs for rate-based protocols. Measurement time is from SYN to FIN. Run single flows and multi-flows (3 connections) to ensure accurate reporting and application of results. Capture and save some complete trace files – particularly when the unexpected is occurring.
Protocol Test Results
SCPS-TP and TCP Tests Single Stream Baseline with no congestion Multi-Stream with three sources and sinks for congestion control algorithm testing. Solaris Operating System 100 BaseT Interfaces Binomial Error Distributions 1E-8, 1E-7, 1E-6, 1E-5 Packet Size 1024 Bytes Delay 10 msec, 250 msec, 500 msec
Rate-Based Protocols Investigate COTS solutions, which tend to be directed at multicast applications: MFDP, MDP, Digital Fountain, others. Are commercial implementations available and usable? Are multicast-based implementations overly complex? Should unicast protocols be developed? Congestion is controlled by the network owner rather than by the protocol (see the sketch below). Therefore, multi-stream tests were not considered necessary.
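To make the contrast with TCP concrete, here is a minimal, hypothetical sketch of open-loop, rate-paced sending over UDP: the transmission rate is fixed by configuration (i.e., by the network owner), not adapted from congestion signals. This is only an illustration of the pacing idea, not the MDP/MFDP or Digital Fountain implementations, and it omits the reliability layer (NACKs, retransmission, or FEC) that real rate-based protocols add.

```python
# Hypothetical open-loop rate-paced UDP sender. The rate is an operator
# setting; nothing in the loop reacts to loss or delay. Reliability is
# intentionally omitted for brevity.

import socket
import time

def send_at_rate(data, dest, rate_bps, payload_bytes=1024):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    interval = (payload_bytes * 8) / rate_bps      # seconds between packets
    next_send = time.monotonic()
    for offset in range(0, len(data), payload_bytes):
        sock.sendto(data[offset:offset + payload_bytes], dest)
        next_send += interval
        delay = next_send - time.monotonic()
        if delay > 0:                              # pace; never exceed the set rate
            time.sleep(delay)
    sock.close()

# Example: push 1 MB at a configured 5 Mbps to a hypothetical receiver address.
send_at_rate(b"\x00" * 1_000_000, ("192.0.2.10", 5000), rate_bps=5e6)
```

Because the rate never backs off, such a flow only behaves well where the operator has engineered or reserved the bandwidth, which is the caution repeated in the conclusions.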
Theoretical Steady State Throughput The TCP performance equation is from Mathis, M. et al., "The Macroscopic Behavior of the TCP Congestion Avoidance Algorithm," Computer Communication Review, vol. 27, no. 3, July 1997. [Chart annotations: "Delay Tolerant"; "Increasing Delay"]
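The cited Mathis et al. bound is BW ≈ (MSS / RTT) · (C / sqrt(p)), where p is the packet loss probability and C is a constant near 1. Below is a small sketch (not from the presentation) that evaluates the bound for the delays and bit-error rates in the test matrix; the BER-to-packet-loss conversion and the choice C = sqrt(3/2) are assumptions, so treat the numbers as rough upper bounds rather than predictions of the measured results.

```python
# Evaluate the Mathis et al. (1997) steady-state TCP throughput bound
#   BW ~= (MSS / RTT) * (C / sqrt(p))
# for the tested delays and bit-error rates. The BER is converted to an
# approximate per-packet loss probability for 1024-byte packets.

import math

PKT_BITS = 1024 * 8                 # 1024-byte packets, as in the tests
C = math.sqrt(3.0 / 2.0)            # assumed constant (periodic loss, no delayed ACKs)

for rtt_ms in (10, 250, 500):
    for ber in (1e-8, 1e-7, 1e-6, 1e-5):
        p_loss = 1.0 - (1.0 - ber) ** PKT_BITS      # per-packet loss probability
        bw = (PKT_BITS / (rtt_ms / 1000.0)) * (C / math.sqrt(p_loss))
        print(f"RTT {rtt_ms:3d} ms, BER {ber:.0e}: <= {bw / 1e6:8.2f} Mbps")
```

The inverse dependence on RTT and on sqrt(p) is what makes steady-state TCP throughput collapse as delay and error rate grow, while a rate-based sender is unaffected by either.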
Another Example of TCP Steady State Performance Don't use TCP for long delays; however, one can still use IP and a rate-based protocol. Chart is from "Why not use the Standard Internet Suite for the Interplanetary Internet?" by Robert C. Durst, Patrick D. Feighery, and Keith L. Scott.
TCP Throughput at 250 msec RTT [chart; annotation: slow-start phenomena]
TCP Standard Deviation for 30 Trials The point at which the first error occurs has a large effect on TCP throughput, as shown by the standard deviations for 1E-8 and 1E-7.
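A back-of-the-envelope sketch of why the timing of the first error dominates the variance, assuming 1024-byte packets, independent bit errors, and a 10 Mbyte transfer as in the following charts; the figures are illustrative, not measured.

```python
# With per-bit error rate BER and 1024-byte packets, the index of the packet
# carrying the *first* error is geometrically distributed, so its mean and its
# standard deviation are both roughly 1/p packets. When that mean is comparable
# to (or larger than) the file length, run-to-run throughput varies widely.

PKT_BITS = 1024 * 8
FILE_PKTS = 10_000_000 // 1024            # ~10 MB file in 1024-byte packets

for ber in (1e-8, 1e-7, 1e-6, 1e-5):
    p_loss = 1.0 - (1.0 - ber) ** PKT_BITS
    mean_first_loss = 1.0 / p_loss        # mean (and roughly the std dev), in packets
    print(f"BER {ber:.0e}: first loss after ~{mean_first_loss:,.0f} packets "
          f"(file is {FILE_PKTS:,} packets)")
```

At 1E-8 many runs finish before any loss occurs while others hit one early, and at 1E-7 the first loss can land anywhere in the transfer, which matches the large standard deviations observed at those error rates.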
Protocol Throughput 500 msec Delay / 10 Mbyte File, Single Flow. All rate-based protocols need work to reach their full potential.
Protocol Throughput 250 msec Delay / 10 Mbyte File, Single Flow. Slight improvement in congestion-friendly protocols at 250 msec vs. 500 msec delay.
Protocol Throughput 500 msec Delay / 1 Mbyte File, Single Flow The smaller the file, the more important the startup algorithms become – except for EXTREMELY small transactions such as command and control.
MDP Tuning (Packet Size 1024 Bytes, Delay 500 msec) The receiver cannot keep up, and performance degrades rapidly at transmission rate settings greater than 40 Mbps. Our operating system and hardware could keep up with the current MDP implementation at rates up to 20 Mbps. Greatest throughput was achieved at transmission rate settings of 35 – 40 Mbps. [Chart: throughput vs. rate setting]
Protocol Throughput: SCPS-TP Pure Rate-Based – Ack Every Packet, Single Flow. [Chart annotations: "Increasing Delay"; "Initialization and Termination"] Performance is delay tolerant, but not completely insensitive to delay.
Multi-stream Testing of Congestion-Friendly Protocols Necessary The only truly meaningful test for congestion-friendly protocols (need to add congestion!). Single-stream tests provide a baseline, but that is all. Work in Progress Many problems getting SCPS to operate correctly in a multi-streaming configuration. Solaris appears to be the most stable system. BSD is very buggy for both SCPS-TP (all cases) and TCP (over long delays). For single-machine operation, SCPS has to be recompiled as three applications and then performs load sharing, which is undesirable for this emulation. Using three streams competing for bandwidth. Using three senders and three receivers due to the load sharing that occurs in SCPS when a single machine sends three streams.
SCPS Gateway Working on evaluation of the SCPS gateway using MITRE-provided code. Gateway is operational. Testing is just beginning.
Conclusions Very small transactions such as command and control should see little difference in performance between TCP, any variant of SCPS-TP, or a rate-based protocol. From the single-stream tests and preliminary multi-stream tests, there does not appear to be a significant advantage to deploying SCPS-TP over TCP. In extremely errored environments with high RTT delays, a rate-based protocol is advisable if you properly engineer the network. Beware of using rate-based protocols on shared networks unless you can reserve bandwidth. All rate-based protocols tested need additional work and/or faster machines to reach their potential; for the commercial rate-based protocols tested, this may be due to their algorithms and coding being optimized for multicast operation. New TCP research may dramatically improve TCP operation for near-planetary environments: TCP Pacing with Packet Pair Probing, TCP Westwood, and TCP Explicit Transport Error Notification (ETEN).