E2E Arguments & Project Suggestions (Lecture 4, cs262a)

E2E Arguments & Project Suggestions (Lecture 4, cs262a) Ion Stoica, UC Berkeley September 7, 2016

Software Modularity Break the system into modules: Well-defined interfaces give flexibility Change the implementation of modules Extend the functionality of the system by adding new modules Interfaces hide information Allows for flexibility But can hurt performance
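As a hedged, generic illustration (hypothetical names, not from the slides): callers program against a well-defined interface, so a module's implementation can be changed or extended without touching them, while the extra indirection can cost a little performance.

    # Minimal sketch: a well-defined interface hides implementation details,
    # so modules can be swapped or extended without changing callers.
    from abc import ABC, abstractmethod

    class KeyValueStore(ABC):
        """The interface callers depend on."""
        @abstractmethod
        def put(self, key: str, value: bytes) -> None: ...
        @abstractmethod
        def get(self, key: str) -> bytes: ...

    class InMemoryStore(KeyValueStore):
        def __init__(self):
            self._data = {}
        def put(self, key, value):
            self._data[key] = value
        def get(self, key):
            return self._data[key]

    class LoggingStore(KeyValueStore):
        """A module added later; callers are unchanged."""
        def __init__(self, inner: KeyValueStore):
            self._inner = inner
        def put(self, key, value):
            print(f"put({key!r})")   # extra indirection: flexibility, some overhead
            self._inner.put(key, value)
        def get(self, key):
            return self._inner.get(key)

    def application(store: KeyValueStore):
        store.put("x", b"42")
        return store.get("x")

    print(application(InMemoryStore()))                 # b'42'
    print(application(LoggingStore(InMemoryStore())))   # same caller, new module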

Network Modularity Like software modularity, but with a twist: Implementation distributed across routers and hosts Must decide: How to break system into modules Where modules are implemented

Layering Layering is a particular form of modularization The system is broken into a vertical hierarchy of logically distinct entities (layers) The service provided by one layer is based solely on the service provided by the layer below Rigid structure: easy reuse, but performance suffers

The Problem Re-implement every application (FTP, NFS, HTTP) for every transmission medium (coaxial cable, fiber optic, packet radio)? No! But how does the Internet architecture avoid this?

Solution: Intermediate Layer Introduce an intermediate layer that provides a single abstraction for various network technologies: a new app or medium is implemented only once. A variation on "add another level of indirection".
[Diagram: applications (p2p, HTTP, SSH, NFS) sit above the intermediate layer, which sits above the transmission media (coaxial cable, fiber optic, packet radio).]
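To make the indirection concrete, here is a hedged Python sketch (all names hypothetical, not from the slides): applications are written once against a single network abstraction, and each transmission technology implements that abstraction once, so M applications and N media need M + N implementations instead of M x N.

    # Hypothetical sketch: one narrow abstraction decouples applications
    # from transmission technologies.
    class NetworkLayer:
        """The single abstraction every application is written against."""
        def send(self, dest: str, payload: bytes) -> None:
            raise NotImplementedError

    class FiberLink(NetworkLayer):
        def send(self, dest, payload):
            print(f"fiber -> {dest}: {len(payload)} bytes")

    class PacketRadioLink(NetworkLayer):
        def send(self, dest, payload):
            print(f"radio -> {dest}: {len(payload)} bytes")

    def http_get(net: NetworkLayer, host: str, path: str):
        # The application is implemented once, against NetworkLayer only.
        net.send(host, f"GET {path} HTTP/1.0\r\n\r\n".encode())

    http_get(FiberLink(), "example.org", "/")        # same app code...
    http_get(PacketRadioLink(), "example.org", "/")  # ...over a different medium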

Placing Functionality Most influential paper about placing functionality is “End-to-End Arguments in System Design” by Saltzer, Reed, and Clark “Sacred Text” of the Internet Endless disputes about what it means Everyone cites it as supporting their position

Basic Observation Some applications have end-to-end requirements (reliability, security, etc.) Implementing these in the network is very hard: every step along the way must be fail-proof Hosts: can satisfy the requirement without help from the network, and can't depend on the network to do it

Example: Reliable File Transfer Solution 1: make each step reliable, and then concatenate them. Solution 2: end-to-end check and retry.
[Diagram: Host A's application sends a file through its OS and the network to Host B's application, which replies OK.]
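As a hedged illustration of Solution 2 (hypothetical names; transfer_bytes stands in for whatever unreliable path sits below), the endpoints checksum the whole file and retry until the check passes, regardless of how the intermediate steps behave:

    # Sketch of an end-to-end check and retry, assuming an unreliable
    # transfer_bytes(data) primitive that returns whatever arrived.
    import hashlib, random

    def transfer_bytes(data: bytes) -> bytes:
        """Stand-in for the network path: occasionally corrupts the payload."""
        if random.random() < 0.3:
            return data[:-1] + b"?"
        return data

    def reliable_transfer(data: bytes, max_retries: int = 10) -> bytes:
        want = hashlib.sha256(data).hexdigest()   # sender-side checksum
        for attempt in range(max_retries):
            received = transfer_bytes(data)
            if hashlib.sha256(received).hexdigest() == want:  # receiver-side check
                return received                   # end-to-end check passed
        raise IOError("transfer failed after retries")

    print(len(reliable_transfer(b"x" * 1000)))    # 1000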

Discussion Solution 1 is not complete: what happens if a network element misbehaves? The receiver has to do the check anyway! Solution 2 is complete: full functionality can be entirely implemented at the application layer, with no need for reliability from lower layers. So is there any need to implement reliability at lower layers at all?

Take Away Implementing this functionality in the network: doesn't reduce host implementation complexity, does increase network complexity, and probably imposes delay and overhead on all applications, even those that don't need the functionality. However, implementing it in the network can enhance performance in some cases, e.g., over a very lossy link.

Conservative Interpretation "Don't implement a function at the lower levels of the system unless it can be completely implemented at this level." In other words, unless you can relieve hosts of the burden entirely, don't bother.

Radical Interpretation Don't implement anything in the network that can be implemented correctly by the hosts (e.g., multicast). Make the network layer absolutely minimal, ignoring performance issues.

Moderate Interpretation Think twice before implementing functionality in the network. If hosts can implement the functionality correctly, implement it at a lower layer only as a performance enhancement, and only if it does not impose a burden on applications that do not require that functionality.

Summary Layering is a good way to organize systems (e.g., networks) Unified Internet layer decouples apps from networks E2E argument encourages us to keep lower layers (e.g., IP) simple

Project Suggestions

Spark, a BSP System All tasks in the same stage implement the same operations, with single-threaded, deterministic execution; RDDs are immutable datasets; the barrier between stages (super-steps) is implicit in the data dependency (the shuffle).
[Diagram: two stages (super-steps) of tasks (processors), each reading an RDD, connected by a shuffle.]
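For concreteness, a minimal PySpark sketch of the BSP pattern (assuming a local Spark installation; the word-count job is only illustrative): every task in a stage runs the same operation on its partition, and the shuffle behind reduceByKey acts as the implicit barrier between stages.

    # Hedged PySpark sketch: two stages separated by a shuffle (the barrier).
    from pyspark import SparkContext

    sc = SparkContext("local[*]", "bsp-sketch")

    words = sc.parallelize(["a", "b", "a", "c", "b", "a"])
    counts = (words
              .map(lambda w: (w, 1))              # stage 1: same map task everywhere
              .reduceByKey(lambda x, y: x + y))   # shuffle = barrier, then stage 2
    print(sorted(counts.collect()))               # [('a', 3), ('b', 2), ('c', 1)]
    sc.stop()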

Scheduling for Heterogeneous Resources Spark assumes tasks are single-threaded: one task per slot, and typically one slot per core. Challenge: a task may call a library that is multithreaded, or that runs on other computation resources such as GPUs. Goal: generalize Spark's scheduling model.
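One hedged way to express this, sketched with Ray's per-task resource annotations (Ray is introduced later in these slides; the specific numbers are illustrative): a task declares how many cores or GPUs it needs, so the scheduler no longer has to assume one single-threaded task per core.

    # Sketch using Ray's num_cpus / num_gpus annotations (numbers illustrative).
    import ray

    ray.init()

    @ray.remote(num_cpus=4)      # wraps a multithreaded library: reserve 4 cores
    def blas_heavy(n):
        import numpy as np       # BLAS may use several threads internally
        a = np.random.rand(n, n)
        return float((a @ a).sum())

    @ray.remote(num_gpus=1)      # runs on an accelerator: reserve one GPU
    def train_step(step):
        return step + 1          # placeholder for GPU work

    print(ray.get(blas_heavy.remote(256)))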

BSP Limitations BSP is great for data-parallel jobs, but not the best fit for more complex computations, e.g., linear algebra algorithms (multiple inner loops) and some ML algorithms.

Example: Recurrent Neural Networks x[t]: input vector at time t (e.g., a frame in a video); y[t]: output at time t (e.g., a prediction about the activity in the video); hl: initial hidden state for layer l.
[Diagram: inputs x[0]…x[4] feed a stack of three recurrent layers (h1, h2, h3), which produce outputs y[0]…y[4].]

Example: Recurrent Neural Networks
    for t in range(num_steps):
        h1 = rnn.first_layer(x[t], h1)
        h2 = rnn.second_layer(h1, h2)
        h3 = rnn.third_layer(h2, h3)
        y = rnn.fourth_layer(h3)
[Diagram: the same stack of layers as before, with the loop body mapped onto it.]
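A hedged, self-contained toy version of this loop (random tanh cells standing in for the slide's rnn.*_layer calls) makes the data dependencies concrete: each hidden state is carried across time steps, so iterations are not independent, data-parallel work.

    # Toy stand-in for rnn.*_layer: each "layer" is a tanh cell whose hidden
    # state is carried across time steps.
    import numpy as np

    rng = np.random.default_rng(0)
    D = 8                                    # hidden/input size (illustrative)

    def make_cell():
        Wx, Wh = rng.standard_normal((D, D)), rng.standard_normal((D, D))
        return lambda x, h: np.tanh(x @ Wx + h @ Wh)

    first_layer, second_layer, third_layer = make_cell(), make_cell(), make_cell()
    Wy = rng.standard_normal((D, 1))
    fourth_layer = lambda h: h @ Wy          # output projection

    num_steps = 5
    x = rng.standard_normal((num_steps, D))
    h1 = h2 = h3 = np.zeros(D)

    for t in range(num_steps):               # same loop shape as the slide
        h1 = first_layer(x[t], h1)           # state flows from step to step
        h2 = second_layer(h1, h2)
        h3 = third_layer(h2, h3)
        y = fourth_layer(h3)
        print(t, float(y[0]))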

Example: Recurrent Neural Networks [Animation: the loop executes step by step. At t = 0 the four updates run in order (h1, h2, h3, then y[0]); at t = 1 the same sequence produces y[1]; and so on, each update consuming the hidden state from the previous step.]

Example: Recurrent Neural Networks [Animation: the computation unfolds as a task graph over x[0]…x[4], h1-h3, and y[0]…y[4]; legend: blue - task completed, red - task running, plus markers for dependence ready / dependence unready. Tasks start as soon as their dependences are ready, not in fixed stages.]

How would BSP work? BSP assumes all tasks in the same stage run the same function: not the case here!

How would BSP work? BSP assumes all tasks in the same stage operate only on local data: not the case here!
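A hedged sketch of the alternative: express each cell update as a fine-grained task over futures (written here in the style of Ray, introduced on the next slide; the cell function is a toy stand-in), so the scheduler sees the actual dependency graph instead of forcing lockstep stages.

    # Hedged sketch: the RNN loop as a dynamic task graph of futures.
    import ray
    ray.init()

    @ray.remote
    def cell(x, h):                    # toy stand-in for one rnn layer update
        return [xi + hi for xi, hi in zip(x, h)]

    num_steps, D = 3, 4
    x = [[float(t)] * D for t in range(num_steps)]
    h1 = h2 = h3 = [0.0] * D           # plain values; futures after the first step

    for t in range(num_steps):
        h1 = cell.remote(x[t], h1)     # arguments may be values or futures
        h2 = cell.remote(h1, h2)       # depends on h1's future, not on a barrier
        h3 = cell.remote(h2, h3)

    print(ray.get(h3))                 # block only when the final result is needed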

Ray: Fine-grained parallel execution engine Goal: make it easier to parallelize Python programs, in particular ML algorithms.
Python:
    def add(a, b):
        return a + b
    …
    x = add(3, 4)
Ray:
    @ray.remote
    def add(a, b):
        return a + b
    …
    x_id = add.remote(3, 4)
    x = ray.get(x_id)

Another Example
    import ray
    ray.init()   # start/connect to Ray before making remote calls
    @ray.remote
    def f(stepsize):
        # do computation…
        return result
    # Run 4 experiments in parallel
    results = [f.remote(stepsize) for stepsize in [0.001, 0.01, 0.1, 1.0]]
    # Get the results
    ray.get(results)

Ray Architecture
Driver: runs a Ray program.
Worker: executes Python functions (tasks).
Object Store: stores Python objects; uses shared memory on the same node.
Global Scheduler: schedules tasks based on global state.
Local Scheduler: schedules tasks locally.
System State & Message Bus: stores the up-to-date control state of the entire system and relays events between components.
[Diagram: each node runs drivers/workers, an Object Store with an Object Manager, and a Local Scheduler; the shared System State & Message Bus holds the Object Table, Function Table, Task Table, and Event Table; multiple Global Schedulers sit alongside it.]

Ray Architecture
Object Store: could evolve into storage for Arrow.
Backend: could evolve into the RISE microkernel.
[Diagram: same architecture as the previous slide.]

Ray System Instantiation & Interaction
[Diagram: each node runs a driver and workers, an Object Store with an Object Manager, and a Local Scheduler. Drivers and workers put/get objects in the local Object Store; Object Managers transfer and evict objects, forming a distributed object store across nodes. Drivers and workers submit tasks to the Local Scheduler, which either executes them locally or submits them to a Global Scheduler; the System State & Message Bus is shared and sharded.]

Example
The driver on node N1 runs:
    @ray.remote
    def add(a, b):
        return a + b
    …
    v_id = ray.put(3)
    x_id = add.remote(v_id, 4)
    x = ray.get(x_id)
[Animation: the walkthrough steps through the system state as the program executes across nodes N1 and N2.]
- Declaring add() registers it in the Function Table as (fun_id, add(a, b)…).
- ray.put(3) stores 3 in N1's Object Store; the Object Table records v_id: N1.
- add.remote(v_id, 4) adds (task_id, fun_id, v_id, 4) to the Task Table; the remote() invocation is non-blocking.
- ray.get(x_id) blocks, waiting for the remote function to finish.
- The task is scheduled onto a worker on N2; v_id's value is copied into N2's Object Store, so the Object Table now shows v_id: N1,N2.
- The worker executes add(v_id, 4), stores the result 7 in N2's Object Store, and the Object Table records x_id: N2.
- The result is transferred back to N1's Object Store (x_id: N2,N1), and ray.get(x_id) returns x = 7. DONE!

Project & Exam Dates Wednesday, 9/7: google doc with project suggestions Include other topics, such as graph streaming Monday, 9/19: pick a partner and send your project proposal I’ll send a google form to fill in for your project proposals Monday, 10/12: project progress review More details to follow Wednesday, 10/5: Midterm exam