Final Review
CS144 Review Session 9
June 4, 2008
Derrick Isaacson, Maria Kazandjieva, Ben Nham
Announcements Upcoming dates Final Exam: June 6, 12:15 p.m.
Final Review
Physical & Link Layers
NIC Hardware
Wireless Link Layers
Wireless Routing
Network Coding
Security
SIP
Physical & Link Layers
Chips vs. bits: chips are the data units transferred at the physical layer; bits are the data units above the physical layer
Encoding motivations: DC balancing and synchronization; encoding can also recover from some chip errors
More chips per bit: fewer bits per second, but more robust; fewer chips per bit: more bits per second, but less robust
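The chips-vs.-bits trade-off can be sketched with Manchester encoding, where each data bit is sent as two chips. This is a hypothetical illustration, not code from the course: it shows why the physical layer sends more chips than bits, why the line stays DC balanced, and how an invalid chip pair exposes a chip error.

```python
def manchester_encode(bits):
    """Encode each bit as a chip pair: 1 -> (1, 0), 0 -> (0, 1)."""
    chips = []
    for b in bits:
        chips.extend((1, 0) if b else (0, 1))
    return chips

def manchester_decode(chips):
    """Recover bits from chip pairs; a mismatched pair signals a chip error."""
    bits = []
    for hi, lo in zip(chips[0::2], chips[1::2]):
        if (hi, lo) == (1, 0):
            bits.append(1)
        elif (hi, lo) == (0, 1):
            bits.append(0)
        else:
            raise ValueError("invalid chip pair (possible chip error)")
    return bits

data = [1, 0, 1, 1]
encoded = manchester_encode(data)
assert len(encoded) == 2 * len(data)      # 2 chips per bit: half the bps
assert sum(encoded) == len(encoded) // 2  # equal 1s and 0s: DC balanced
assert manchester_decode(encoded) == data
```

Each chip pair contains a guaranteed transition, which is what gives the receiver its synchronization.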
Link Layer
Single-hop addressing (Ethernet addresses)
Media Access Control (MAC): regulate access to a shared medium and maximize efficiency
Time Division Multiple Access (TDMA)
Carrier Sense Multiple Access with Collision Detection (CSMA/CD)
Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA)
Request-to-send / clear-to-send (RTS/CTS)
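TDMA, the simplest of these MAC schemes, can be sketched in a few lines. This is a hypothetical toy model (the slot length and node count are assumptions for illustration): time is carved into fixed slots and each node may transmit only in its own slot, so collisions are impossible by construction.

```python
def tdma_owner(t, num_nodes, slot_len=1):
    """Return which node owns the shared medium at time t."""
    return (t // slot_len) % num_nodes

# With 3 nodes and unit-length slots, ownership rotates 0, 1, 2, 0, 1, 2, ...
schedule = [tdma_owner(t, num_nodes=3) for t in range(6)]
assert schedule == [0, 1, 2, 0, 1, 2]
```

The cost of this collision-freedom is wasted capacity: a node's slot goes idle even when it has nothing to send, which is what motivates contention-based schemes like CSMA/CD and CSMA/CA.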
More Link Layer
Collision Detection
Constrains the max length of the wire and the min length of a frame (a sender must still be transmitting when a collision at the far end reaches it)
Randomized exponential backoff on collision detection
Less efficient use of the link when collisions are frequent
Collision Domain
Hubs connect segments into one larger shared collision domain
Switches store and forward packets between separate collision domains
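The randomized exponential backoff above can be sketched as follows. This is a hedged illustration of the Ethernet-style rule: after the n-th collision a station waits a random number of slot times drawn uniformly from [0, 2^n - 1], with the window capped (at 2^10 - 1 in 802.3) so it stops growing.

```python
import random

def backoff_slots(num_collisions, rng=random):
    """Slots to wait after the given number of consecutive collisions."""
    k = min(num_collisions, 10)       # cap the window growth, as in 802.3
    return rng.randint(0, 2**k - 1)   # uniform over [0, 2^k - 1]

for _ in range(100):
    assert 0 <= backoff_slots(1) <= 1
    assert 0 <= backoff_slots(3) <= 7
    assert 0 <= backoff_slots(16) <= 1023  # window no longer grows
```

Doubling the window on each collision spreads contending stations out quickly, which is why efficiency degrades gracefully rather than collapsing under load.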
NIC Hardware & OS Overview
Hardware-enforced user/kernel boundary: switching between modes is expensive
System calls: calls into the kernel on behalf of the currently running process
Interrupts: code not acting on behalf of the current process (e.g. NIC-generated interrupts, TCP/IP processing)
The OS gives each process a virtual address space for fault isolation
Paging: divide memory into chunks and map between virtual and physical pages of memory
Device communication between processor and device over the I/O bus: memory-mapped devices, special I/O instructions, DMA
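The paging point can be made concrete with a toy translation function. This is a hypothetical sketch (the 4 KiB page size and the tiny dict-based page table are assumptions for illustration): a virtual address splits into a virtual page number, which the page table maps to a physical frame, and an offset that is carried over unchanged.

```python
PAGE_SIZE = 4096  # assumed 4 KiB pages

def translate(vaddr, page_table):
    """Map a virtual address to a physical address via the page table."""
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    if vpn not in page_table:
        raise MemoryError("page fault: unmapped virtual page")
    return page_table[vpn] * PAGE_SIZE + offset

page_table = {0: 7, 1: 3}  # virtual page number -> physical frame number
assert translate(100, page_table) == 7 * PAGE_SIZE + 100
assert translate(PAGE_SIZE + 8, page_table) == 3 * PAGE_SIZE + 8
```

The fault-isolation point falls out directly: a process can only name virtual pages, and any page absent from its table faults instead of touching another process's memory.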
More NIC/OS
Expensive context switches can hurt networking performance
The TCP push bit hints to the OS when to wake the listening process
Send and receive packets in batches
Minimize latency for TCP
Device driver architecture
Polling: loop asking the card whether a buffer is free / a packet has arrived; wastes CPU, and latency is high if the poll is scheduled for later
Interrupt-driven: what most OSes use; low latency, but expensive and performs poorly in high-throughput scenarios
Best: an adaptive algorithm that switches between interrupts and polling
Socket implementation and buffering: data must be easy to encapsulate, so don't store packets in contiguous memory
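The non-contiguous buffering idea can be sketched with a BSD mbuf-style chain. This is a hypothetical illustration (the `Buf` class and header strings are invented for the example): a packet is a linked chain of small buffers, so each layer encapsulates by prepending one new link instead of copying the whole payload into fresh contiguous memory.

```python
class Buf:
    """One link in a chain of packet buffers (an mbuf-style node)."""
    def __init__(self, data, next=None):
        self.data = data
        self.next = next

def prepend_header(chain, header):
    """Encapsulate by linking a header buffer in front of the chain: O(1)."""
    return Buf(header, chain)

def flatten(chain):
    """Walk the chain to reassemble the bytes (e.g. when handing to the NIC)."""
    out = b""
    while chain is not None:
        out += chain.data
        chain = chain.next
    return out

payload = Buf(b"hello")
pkt = prepend_header(prepend_header(payload, b"TCP|"), b"IP|")
assert flatten(pkt) == b"IP|TCP|hello"
```

Each layer adds its header without touching the bytes below it, which is exactly the "encapsulate data easily" requirement from the slide.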
Wireless Link Layers & Wireless Routing
See section 8 slides