SUPPORTING MULTIMEDIA COMMUNICATION OVER A GIGABIT ETHERNET NETWORK - BERNARD DAINES, JONATHAN C.L. LIU AND K. SIVALINGAM. Presented by: Sainath Morpoju (16801102), Srinivas Medapati (17439921)
WHY GIGABIT ETHERNET? Streaming multimedia, VR, high-performance distributed computing, and distance learning all demand high bandwidth. National backbone network speeds are 155 – 622 Mbps. A typical video server generates 2.88 Gbps of traffic from its storage subsystem. Most of the information available on the Internet is multimedia traffic. Due to the exponential increase in Internet traffic, performance has become fairly unpredictable, and efficient multimedia networking is not a trivial task; there are many challenges.
GIGABIT NETWORK COMPONENTS Gigabit Network Interface Card (GNIC), Buffered Hub / Full-Duplex Repeater (FDR), and Gigabit Routing Switch (GRS). Throughput is the amount of digital data that is moved from one place to another in a given time. The BER (bit error rate) is calculated by comparing the transmitted sequence of bits to the received bits and counting the number of errors (sketched below).
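A minimal sketch in C of the BER calculation described above, assuming the transmitted and received sequences are available as equal-length byte buffers (the function name and buffer layout are illustrative, not from the paper):

    #include <stddef.h>

    /* Count differing bits between a transmitted and a received buffer,
       then divide by the total number of bits compared. */
    double bit_error_rate(const unsigned char *tx, const unsigned char *rx, size_t len)
    {
        size_t errors = 0;
        for (size_t i = 0; i < len; i++) {
            unsigned char diff = tx[i] ^ rx[i];  /* differing bits are set */
            while (diff) {                       /* count the set bits */
                errors += diff & 1u;
                diff >>= 1;
            }
        }
        return (double)errors / (double)(len * 8);
    }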
GNIC (Gigabit Network Interface Card)
IEEE 802.3z MAC frame format for gigabit-speed Ethernet. The interframe gap is 96 bit times, i.e. 0.096 microseconds at gigabit line rate (worked out below).
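As a quick check on that figure: the 802.3 interframe gap is fixed at 96 bit times, so its duration is simply 96 divided by the line rate. A minimal sketch (the function name is illustrative):

    /* Interframe gap duration in seconds: 96 bit times at the given rate.
       At 1 Gbps this is 96e-9 s = 0.096 us; at 100 Mbps it is 0.96 us. */
    double ifg_seconds(double line_rate_bps)
    {
        return 96.0 / line_rate_bps;
    }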
GNIC DESIGN ISSUES IEEE 802.3z frame compatibility; operating at line speeds; reducing host CPU utilization.
GNIC ARCHITECTURE Descriptor-based DMA (direct memory access) to reduce host CPU utilization (see the sketch below); dual burst FIFOs for operating at line speeds; a GMAC that implements the IEEE 802.3z standard; a packet buffer to avoid lost packets.
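A minimal sketch of what a descriptor-based DMA receive ring might look like; the field names, flags, and ring size are assumptions for illustration, not the GNIC's actual layout. The host posts buffer addresses in descriptors, the card DMAs frames into them and marks them done, so the CPU never copies packet data itself:

    #include <stdint.h>

    #define RING_SIZE 64              /* illustrative ring length */

    /* One DMA descriptor: the NIC owns it until it sets DESC_DONE. */
    struct dma_desc {
        uint64_t buf_addr;            /* physical address of the packet buffer */
        uint16_t buf_len;             /* buffer capacity in bytes */
        uint16_t frame_len;           /* filled in by the NIC on receive */
        uint32_t status;              /* ownership / completion flags */
    };

    #define DESC_OWNED_BY_NIC (1u << 0)
    #define DESC_DONE         (1u << 1)

    struct dma_desc rx_ring[RING_SIZE];

    /* Host-side receive poll: deliver completed buffers upward and
       hand the descriptors back to the NIC for reuse. */
    void poll_rx_ring(void (*deliver)(uint64_t addr, uint16_t len))
    {
        for (int i = 0; i < RING_SIZE; i++) {
            if (rx_ring[i].status & DESC_DONE) {
                deliver(rx_ring[i].buf_addr, rx_ring[i].frame_len);
                rx_ring[i].status = DESC_OWNED_BY_NIC;  /* recycle */
            }
        }
    }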
FDR (full duplex repeater)
Need for FDR? In a typical Ethernet, the MAC layer implements CSMA/CD; collision detection and retransmissions are inefficient. A hub is a simple repeater (one-to-all) with no processing of link-layer data. Switches are intelligent (one-to-one) but use specialized hardware, so their cost is high. The FDR uses a simple design without any ASICs.
FDR Architecture Every port has an input buffer and an output buffer, connected by a 1000 Mbps forwarder bus. A frame forwarder picks a port to transmit in round-robin fashion for fair allocation of bandwidth (see the sketch below). Full-duplex transmission is possible without collisions. Congestion control uses the IEEE 802.3x protocol to notify the sender if a port buffer is full.
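A minimal sketch of the round-robin frame forwarder described above, assuming per-port FIFO input buffers; the port count and helper functions are illustrative, not the FDR's actual design:

    #include <stdbool.h>

    #define NUM_PORTS 8               /* illustrative port count */

    struct frame;                                     /* opaque Ethernet frame */
    bool          port_has_frame(int port);           /* assumed queue primitives */
    struct frame *port_dequeue(int port);
    void          broadcast_on_bus(struct frame *f);  /* forwarder bus: one-to-all */

    /* Visit ports in round-robin order; each port with a queued frame
       gets one turn on the forwarder bus, so active ports share the
       1000 Mbps bus fairly. */
    void forwarder_tick(void)
    {
        static int next = 0;          /* where the last sweep stopped */
        for (int i = 0; i < NUM_PORTS; i++) {
            int port = (next + i) % NUM_PORTS;
            if (port_has_frame(port)) {
                broadcast_on_bus(port_dequeue(port));
                next = (port + 1) % NUM_PORTS;
                return;               /* one frame per tick */
            }
        }
    }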
GRS (Gigabit Routing Switch)
PE-4884 Architecture It has 14 slots: 12 for interface/channel cards and 2 for EMMs (enterprise management modules). The 4884 backplane has 52 Gbps capacity. Every channel card is connected to the central memory through two non-blocking full-duplex gigabit channels.
Cross-bar Architecture Issues Port-based memory: inefficient use of available memory, since even inactive ports have memory allocated. Head-of-line blocking: a packet destined for a busy receive port blocks the packets queued behind it (see the sketch below). Difficult to provide reliable QoS support: limited ability to serve traffic based on priority.
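A minimal sketch of head-of-line blocking with a single FIFO per input port (all names are illustrative): if the head frame's output is busy, every frame behind it waits, even those bound for idle ports:

    #include <stdbool.h>
    #include <stddef.h>

    struct frame { int dst_port; /* ... payload ... */ };

    /* One FIFO per input port: only the head entry may be forwarded. */
    struct fifo { struct frame *slots[64]; size_t head, tail; };

    bool output_busy(int port);       /* assumed port-state query */
    void forward(struct frame *f);    /* assumed crossbar transfer */

    void service_input(struct fifo *q)
    {
        if (q->head == q->tail)
            return;                   /* queue empty */
        struct frame *f = q->slots[q->head % 64];
        if (output_busy(f->dst_port))
            return;                   /* head blocks everything behind it */
        forward(f);
        q->head++;
    }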
Shared Memory Architecture Typically used for 10 Mbps and 100 Mbps switches. All ports access a shared memory pool, and access to the central memory is granted by an arbitration device. At gigabit speeds the arbitration device cannot run fast enough to be non-blocking.
Parallel Access Shared Memory All ports have simultaneous full-duplex gigabit access to the centralized memory pool. There is no need to replicate and store multicast traffic on multiple egress ports. Outbound traffic can be pulled from dedicated per-port priority queues (see the sketch below).
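A minimal sketch of pulling outbound traffic from per-port priority queues; the two-level strict-priority scheme and all names are assumptions for illustration:

    #include <stddef.h>

    #define NUM_PRIORITIES 2          /* illustrative: high and low */

    struct frame;

    /* Each egress port owns one queue per priority level; frames stay
       in the shared memory pool and only pointers are queued here. */
    struct egress_port {
        struct frame *queues[NUM_PRIORITIES][64];
        size_t head[NUM_PRIORITIES], tail[NUM_PRIORITIES];
    };

    /* Strict-priority pull: always drain the highest non-empty queue,
       so high-priority traffic (e.g. video) never waits behind bulk data. */
    struct frame *pull_outbound(struct egress_port *p)
    {
        for (int pri = 0; pri < NUM_PRIORITIES; pri++) {
            if (p->head[pri] != p->tail[pri])
                return p->queues[pri][p->head[pri]++ % 64];
        }
        return NULL;                  /* nothing queued on this port */
    }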
EXPERIMENTS Three experiments were conducted to test the Gigabit Ethernet setup, based on an echo server, a file server, and a video server.
EXPERIMENT 1
Experiment 1 : Echo Server Both machines are fitted with a GNIC and use a special device driver (yellowfin.c). Netperf and netserver were used to measure peak performance. Custom code was written for accurate benchmarking; it used TCP_NODELAY to turn off Nagle's algorithm, which groups smaller packets (see the sketch below). Linux and NT systems were tested.
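A minimal sketch of the TCP_NODELAY setting mentioned above; this is the standard sockets API, but the surrounding benchmark code is not shown:

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>          /* TCP_NODELAY */

    /* Turn off Nagle's algorithm so small writes go on the wire
       immediately instead of being coalesced into larger segments.
       Returns 0 on success, -1 on error. */
    int disable_nagle(int sock)
    {
        int one = 1;
        return setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));
    }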
Results For Linux platforms a maximum throughput of 180 – 200 Mbps is achieved. For NT platforms the performance is lower, around 90 Mbps, and it only equals Linux on machines with a faster processor (DEC Alpha). PCI interaction and the internal protocol stack are believed to be the main reasons for the lower performance.
EXPERIMENT 2 : File Server A Pentium Pro 2 (266 MHz) machine running NT Server 4.0 was set up as the file server. Clients request files from the file server and measure throughput; the sum of the throughputs of all clients is recorded using NetBench. All machines were connected through the FDR.
Experiment 3 : Video on demand Designed to measure the number of concurrent accesses for on-demand video titles. Implemented the two-buffer scheme: one buffer is played out while the other is refilled from the network (see the sketch below). QoS (< 1% jitter) is maintained while maximizing concurrent accesses.
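A minimal sketch of a two-buffer playback loop; names, sizes, and the blocking-I/O structure are assumptions for illustration (in the real system the refill and the playout proceed concurrently, e.g. via a reader thread):

    #include <stddef.h>

    #define BUF_BYTES (512 * 1024)    /* illustrative buffer size */

    /* Assumed I/O primitives. */
    size_t fill_from_network(unsigned char *buf, size_t cap);
    void   play_out(const unsigned char *buf, size_t len);

    /* While one buffer is played out, the other is refilled, so playback
       never stalls as long as a refill finishes within one playout period. */
    void playback_loop(void)
    {
        static unsigned char bufs[2][BUF_BYTES];
        size_t len[2];
        int playing = 0;

        len[playing] = fill_from_network(bufs[playing], BUF_BYTES);  /* prefill */
        for (;;) {
            int filling = 1 - playing;
            len[filling] = fill_from_network(bufs[filling], BUF_BYTES);
            play_out(bufs[playing], len[playing]);
            playing = filling;        /* swap roles */
        }
    }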
Results 32 concurrent streams could be maintained at 4 Mbps (aggregate throughput 128 Mbps). 19 concurrent streams were supported at 8 Mbps (152 Mbps aggregate). 9 concurrent streams were supported at 16 Mbps (144 Mbps aggregate). Only 3 concurrent streams could be supported at 32 Mbps, a very high-end video rate (96 Mbps aggregate).
Conclusions From the performance results it was observed that gigabit speeds can be achieved at the interface-card level. High-end workstations such as SUN UltraSPARC and DEC Alpha machines achieve 400 – 500 Mbps at the TCP level. In the near future, processing latency can be further reduced through simplified transport protocols, faster processors, and increased bus widths.
References https://www.cise.ufl.edu/class/cnt6885fa18/spaper01.pdf http://www.visualcapitalist.com/internet-minute-2018/
Q/A
THANK YOU