Analysis of the Increase and Decrease Algorithms for Congestion Avoidance in Computer Networks
or: How I Learned to Stop Worrying and Love the Bomb
Offense by Ionut Trestian
All the great things that came out of this paper
- DDoS
- Botnets
How did all this happen? Some history
- The Internet was started as an academic network
- That meant that all entities involved were:
  - Trustworthy
  - If not, easily identifiable
- I agree that some of the choices made in this paper made sense at that time, but ...
How did all this happen? Some history
[Diagram: the early Internet as a single trust region]
How did all this happen? Some history
- It made sense at the time, for a few reasons:
  - Almost everyone was trustworthy (more or less)
  - No real profit from cheating the system (bigger download speeds, but nothing to use them for)
  - Technology wasn't so great at the time, so network elements (routers, switches) were dumbed down
How did all this happen? The Internet becomes ubiquitous
[Diagram: the network now spans multiple separate trust regions, with commercial ($) interests between them]
A lot of changes since this paper
- The million-dollar question is: where to implement fairness?
- Endpoints
  - Good idea in 1989, because endpoints were honest
  - Not such a good idea now, because the Internet has no accountability
- Routers
  - In 1989, routers couldn't handle such complexity
  - Seems like a good idea now, but cannot mess with old paradigms (see the sketch below)
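To make "fairness in routers" concrete, here is a minimal sketch of per-flow fair queuing using deficit round robin, one mechanism a router could use to enforce fairness no matter how endpoints behave. This is an illustrative toy, not anything proposed in the paper; the quantum and packet sizes are made-up values.

```python
from collections import deque

class DRRScheduler:
    """Toy deficit round robin: each flow earns an equal byte
    budget per round, regardless of how aggressively it sends."""

    def __init__(self, quantum=1500):
        self.quantum = quantum   # bytes credited to each flow per round
        self.queues = {}         # flow_id -> deque of packet sizes (bytes)
        self.deficit = {}        # flow_id -> accumulated byte credit

    def enqueue(self, flow_id, pkt_bytes):
        self.queues.setdefault(flow_id, deque()).append(pkt_bytes)
        self.deficit.setdefault(flow_id, 0)

    def dequeue_round(self):
        """One scheduling round; returns the (flow_id, pkt_bytes) sent."""
        sent = []
        for flow_id, q in self.queues.items():
            self.deficit[flow_id] += self.quantum
            while q and q[0] <= self.deficit[flow_id]:
                pkt = q.popleft()
                self.deficit[flow_id] -= pkt
                sent.append((flow_id, pkt))
            if not q:                       # idle flows forfeit leftover credit
                self.deficit[flow_id] = 0
        return sent

# A greedy flow queues 10 packets, a polite flow queues 1 --
# each round still serves them equally.
sched = DRRScheduler()
for _ in range(10):
    sched.enqueue("greedy", 1500)
sched.enqueue("polite", 1500)
print(sched.dequeue_round())  # [('greedy', 1500), ('polite', 1500)]
```

The point of the sketch: fairness enforced at the router requires per-flow state and scheduling work, which is exactly the complexity 1989-era routers could not afford.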
Routers - Back then and today
- Cisco AGS: Motorola 68000 (about 8 MHz) processor, 1 MB of memory, and two 2E dual-Ethernet line cards
- Cisco CRS-1: line cards at 40 Gbit/s, every line card with a 1.2 GHz processor
Routers – How did they evolve?
[Chart: router evolution plotted against Speed and Fairness]
Where are we today?
- We have unfair endpoints and routers that cannot impose fairness
- Therefore we have an unfair network
- This leads to the well-known problems of today's Internet:
  - Spam
  - DDoS attacks by botnets
  - ...
- And it all happened because of papers like this one, which advocated a false sense of fairness that holds only when everyone behaves
Other issues
- Not all applications get along with the "saw-tooth" behavior of TCP's AIMD (multiplicative decrease, usually by a factor of 2)
- That means one designs an application to deal well with TCP ("TCP friendliness" or compatibility!!)
  - For example, streaming or real-time audio
- This leads to constant tweaking of AIMD to deal well with new applications: IIAD, SQRT, etc. (sketched below)
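To see what that constant tweaking looks like, here is a hedged sketch of the window-update rules: classic AIMD plus the binomial family (which includes IIAD and SQRT) that generalizes it for smoother, streaming-friendly behavior. The constants alpha and beta, the starting window, and the loss pattern are illustrative assumptions, not values from the paper.

```python
def binomial_update(cwnd, loss, k, l, alpha=1.0, beta=0.5):
    """Binomial congestion control:
      no loss: cwnd += alpha / cwnd**k   (per RTT)
      loss:    cwnd -= beta * cwnd**l
    TCP-friendly whenever k + l == 1."""
    if loss:
        return max(1.0, cwnd - beta * cwnd ** l)
    return cwnd + alpha / cwnd ** k

# k=0, l=1: classic TCP AIMD (add 1, halve on loss) -> sharp saw tooth
# k=1, l=0: IIAD (inverse increase, additive decrease) -> gentler for streaming
# k=l=0.5: SQRT -> oscillations milder than AIMD's
for name, (k, l) in {"AIMD": (0, 1), "IIAD": (1, 0), "SQRT": (0.5, 0.5)}.items():
    cwnd, trace = 10.0, []
    for rtt in range(20):
        cwnd = binomial_update(cwnd, loss=(rtt % 7 == 6), k=k, l=l)
        trace.append(round(cwnd, 2))
    print(name, trace)
```

Running this shows AIMD's deep saw tooth next to the much flatter IIAD and SQRT traces, which is precisely why streaming applications pushed for these variants.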
Other issues
- What if all the signs of congestion are there, but the issue is not congestion?
- The authors didn't imagine all transmission media
- Today's links have characteristics the authors never imagined (wireless, where loss often means corruption rather than congestion); see the sketch below
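A hedged illustration of this failure mode: a loss-based sender in the AIMD style reads every loss as congestion, so random wireless corruption keeps cutting its window even when no queue is full. The loss rates and constants below are invented purely for illustration.

```python
import random

def avg_window(loss_rate, rtts=200, seed=1):
    """AIMD sender that treats every packet loss as congestion."""
    random.seed(seed)
    cwnd, total = 10.0, 0.0
    for _ in range(rtts):
        total += cwnd
        if random.random() < loss_rate:  # congestion? corruption? sender can't tell
            cwnd = max(1.0, cwnd / 2)    # multiplicative decrease either way
        else:
            cwnd += 1.0                  # additive increase
    return total / rtts                  # average window ~ achievable throughput

# Wired-like path vs. a lossy radio link with NO congestion at all:
print("clean link :", round(avg_window(0.01), 1))
print("lossy radio:", round(avg_window(0.10), 1))  # collapses from corruption alone
```

The second number comes out far lower even though nothing is congested: the algorithm's only congestion signal is ambiguous, and it always assumes the worst.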
Other issues
- Lots of important stuff left for the end:
  - Delays in feedback
  - Asynchronous operation
  - Estimating the number of users using the resource
Conclusions
- Really bad paper, among the ones that did a lot of damage to the existing Internet
- Lots of other small issues around constantly re-tweaking the inputs based on whatever application runs on top
- Not clear how to detect congestion