The Nimrod Architecture RFC 1992 Presented By Sai H. Lek October 2, 2003.


1 The Nimrod Architecture RFC 1992 Presented By Sai H. Lek October 2, 2003

2 Introduction  Nimrod is a scalable routing architecture designed to accommodate a continually expanding and diversifying internetwork.  First suggested by Noel Chiappa  Has undergone revision and refinement through the efforts of the Nimrod working group of the IETF.

3 Limits of the Current Routing Architecture  Fast routing table growth. The number of entries in the BGP routing table grew from around 15,000 in 1994 to around 105,000 in 2002.  Instability. The rate of route advertisements and withdrawals is increasing. This makes the scalability problem even worse and puts the global routing system in frequent transient and inconsistent states.

4 Limits of the Current Routing Architecture  Slow convergence. It takes a certain amount of time for the routing system to reach a consistent state. Until then, the routing system is in a transient state, which may result in routing failures.

5 Overview of Nimrod  A project which aims, in part, to produce a next-generation routing architecture for the Internet.  Also, more generally, to try and produce a basic design for routing in a single global-scale communication substrate.  A design which will prove sufficiently flexible and powerful to serve into a future as yet unforeseeable.

6 Overview of Nimrod Nimrod does this through the conjunction of two powerful, basic mechanisms:  distribution of maps, as opposed to distribution of routing tables.  selection of routes by the clients of the network, not by the switches in the network. This approach is known as unitary routing or explicit routing.

7 Overview of Nimrod  In Nimrod the route does not have to be chosen by the actual source; it can be the responsibility of an agent working on the source's behalf.  The path is not selected in a fully distributed, hop-by-hop manner, in which each switch has an equal role to play in selecting the path.

8 Overview of Nimrod  Maps of the network's actual connectivity (maps which will usually include high-level abstractions for large parts of that connectivity, just the 'important' ones) are made available to all the entities which need to select paths.  Those entities use these maps to compute paths, and those paths are passed to the actual switches, along with the data, as directions on how to forward the data.

9 Subsystems of the Internetwork Layer  The Nimrod routing architecture springs, in part, from a design vision that sees the entire internetwork layer, although distributed across all the hosts and routers of the internetwork, as a single system.  Nimrod's components are simply a number of the subsystems of this larger system, the internetwork layer.  They are not intended to be a purely standalone set of subsystems, but rather to work together in close concert with the other subsystems of the internetwork layer to provide the internetwork layer service model.

10 Subsystems of the Internetwork Layer  The system is much clearer and easier to manage if the routing is broken up into several subsystems, with the interactions between them open.  Note that Nimrod was initially broken up into separate subsystems for purely internal reasons.

11 Subsystems of the Internetwork Layer The subsystems which comprise the functionality covered by Nimrod are:  Routing information distribution (in the case of Nimrod, topology map distribution, along with the attributes [policy, QOS, etc.] of the topology elements)  Route selection  User traffic handling

12 Subsystems of the Internetwork Layer  Routing information distribution can be defined fairly well without reference to other subsystems.  Route selection might involve finding out which links have the resources available to handle some required level of service.  For user traffic handling, routing is tied in with other subsystems.

13 General Principles  The design philosophy of Nimrod is 'maximize the lifetime (and flexibility) of the architecture'.  Design tradeoffs that will adversely affect the flexibility, adaptability and lifetime of the design are not necessarily wise choices.  They might be the correct choices in a stand-alone system, where the replacement costs are relatively small. In the global communication network, the replacement costs are very much higher.

14 Design Principles for Nimrod  Do the best possible design, and then work out how to deploy it; don't simply incrementally improve the existing system.  Maximize the lifetime by: –Making the design as flexible as possible. –Minimizing the mandatory, system-wide part of the architecture.

15 Design Principles for Nimrod  Maximize robustness by: –Reducing the dependencies among various components. –Use of redundant mechanisms where plausible.  Break the routing up into a number of subsystems, with the interfaces between them visible, and available, to the users.

16 Advantages  Allows maximal overall flexibility (i.e., system lifetime)  Is securable against explicit attack  Interacts well with resource allocation systems  Allows maximal robustness  Interacts well with flexible abstraction mechanisms  Allows policy routing  Allows maximally flexible policy routing

17 Additional Nimrod Architectural Points  Provision for efficient handling of Datagram service mode packets  Using a mesh of Datagram mode flows (DMF's)  Complete routing system, down to the lowest levels

18 Nimrod Mechanisms Addresses  Use of new addresses, called locators Characteristics are:  Variable length  Variable number of levels  Can name topology aggregates as well as individual network elements  Not intended to descend from a fixed root, but will instead be built in a natural "bottom-up" style
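The locator characteristics above can be sketched as a tiny data structure. This is a hypothetical illustration: the class, label scheme, and example names are assumptions, not the format from RFC 1992.

```python
# A hypothetical sketch of a Nimrod-style locator: a variable-length,
# variable-depth sequence of labels naming nested topology aggregates,
# built "bottom-up". All names here are illustrative, not from RFC 1992.

class Locator:
    def __init__(self, labels):
        self.labels = tuple(labels)  # outermost aggregate first

    def prefix_of(self, other):
        """True if this locator names an aggregate containing `other`."""
        return other.labels[:len(self.labels)] == self.labels

    def __repr__(self):
        return ":".join(self.labels)

# A host locator nested inside a campus-level aggregate.
host = Locator(["US", "MIT", "LCS", "host17"])
campus = Locator(["US", "MIT"])
```

Because locators can name aggregates as well as individual elements, a simple prefix test is enough to ask "does this aggregate contain that host?".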

19 Maps  Connectivity information in the form of maps is the key data item in Nimrod.  The data which routers pass around forms the database which is used to select paths.

20 Maps  Nimrod maps consist of: Nodes (which carry an open-ended list of attributes). Arcs (uni-directional links which connect nodes, and cannot have attributes; they simply indicate connectivity).  In general, it is not required that different routers have consistent maps.

21 Attributes Attributes might include:  Bandwidth  Delay  Delay variance  Error rate  Cost  Allowed users (e.g. government only)  Each node also has some inherent attributes, i.e. ones which each node must have:  The locator of the node  The connectivity of the node
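A map with attribute-carrying nodes and attribute-free, unidirectional arcs might be represented roughly like this. The class name, attribute keys, and example values are assumptions for illustration only.

```python
# Minimal sketch of a Nimrod map as described above: nodes carry an
# open-ended attribute list; arcs are uni-directional and carry no
# attributes, indicating connectivity only.

class NimrodMap:
    def __init__(self):
        self.nodes = {}    # node name -> open-ended attribute dict
        self.arcs = set()  # directed (src, dst) pairs; no attributes

    def add_node(self, name, **attrs):
        self.nodes[name] = attrs

    def add_arc(self, src, dst):
        self.arcs.add((src, dst))

m = NimrodMap()
m.add_node("A", bandwidth=100, delay=5)
m.add_node("B", bandwidth=10, delay=50, allowed_users="government")
m.add_arc("A", "B")  # uni-directional: connectivity A -> B only
```

Note that an arc from A to B says nothing about traffic from B to A; a bidirectional link would be modelled as two arcs.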

22 Path Selection  Conceptually done by the users of the network, i.e. hosts.  However, many hosts will not want to deal with keeping maps and running path-selection algorithms.

23 Path Selection  Expected that most hosts will get paths from route servers.  Route servers form a central point at which an organization can express organizational policies with regard to path preferences through the network outside the organization.  Path selection across a large graph, with multiple constraints, is a difficult problem, and will probably be the subject of future research.
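As a sketch of what a route server's constrained path computation might look like, the following runs a shortest-delay search over a map while honoring a minimum-bandwidth constraint. The function name, data layout, and example topology are assumptions, not Nimrod's actual algorithm (which the slides note is an open research problem).

```python
from heapq import heappush, heappop

# Illustrative route-server computation: shortest-delay path over a map,
# restricted to nodes that meet a minimum-bandwidth requirement.

def select_path(nodes, arcs, src, dst, min_bw):
    ok = {n for n, a in nodes.items() if a.get("bandwidth", 0) >= min_bw}
    if src not in ok or dst not in ok:
        return None
    dist, heap = {src: 0}, [(0, src, [src])]
    while heap:
        d, n, path = heappop(heap)
        if n == dst:
            return path
        for (u, v) in arcs:                      # follow directed arcs
            if u == n and v in ok:
                nd = d + nodes[v].get("delay", 1)
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heappush(heap, (nd, v, path + [v]))
    return None

nodes = {"A": {"bandwidth": 100, "delay": 1},
         "B": {"bandwidth": 100, "delay": 2},
         "C": {"bandwidth": 5,  "delay": 1}}
arcs = {("A", "C"), ("C", "B"), ("A", "B")}
```

Pruning the node set before the search is the simplest way to express a hard constraint; real multi-constraint path selection is considerably harder, as the slide notes.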

24 User Data Forwarding Nimrod supports four basic modes for handling user data traffic:  Flow mode. The user does a prior flow setup (which is also part of Nimrod), which can include arbitrarily complex arrangements for resource allocation, security, etc.; packets then carry only flow-identifiers for doing forwarding.  Node Chain mode. Packets carry a list of nodes, and the packet is required to go through the nodes listed, which should define a continuous path across the network. (This is basically a strict source route.)

25 User Data Forwarding  Node Sequence mode. Packets carry a list of nodes, but the list does not define a continuous path across the network, merely points the packet has to travel through. (This is basically a loose source route.)  Datagram mode. Every packet header carries source and destination locators. (This is basically normal IPv4-type forwarding.) The first and last modes are intended to be the ones principally used; the others are for special situations, fault isolation, critical network operations, etc.

26 Datagram Mode  Flows are the fundamental entity in Nimrod.  No packet travels anywhere except in a flow.  In Datagram mode service, Nimrod routers will assign a packet to a sequence of Datagram Mode Flows (DMF's).  DMF's are relatively short-distance flows, set up specifically to handle Datagram mode packets.  The network is completely spanned by a mesh of DMF's.

27 Datagram Mode  The packet traverses the network by being assigned to a sequence of DMF's, a sequence which is specifically selected to move the packet towards its eventual destination.  Datagram mode packet headers contain: A locally usable path-id field. Source and destination locators. A pointer into the locators.
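The three header fields listed above could be laid out as in the sketch below. The field names and types are assumptions for illustration, not the Nimrod wire format.

```python
from dataclasses import dataclass

# Illustrative layout of the Datagram mode header fields listed above.

@dataclass
class DatagramHeader:
    path_id: int   # locally usable flow id of the DMF currently carrying it
    src: tuple     # source locator, outermost label first
    dst: tuple     # destination locator
    ptr: int       # pointer into dst: how far forwarding has progressed

hdr = DatagramHeader(path_id=7,
                     src=("US", "MIT", "h1"),
                     dst=("EU", "CERN", "h9"),
                     ptr=0)
```

The path-id is only locally meaningful (it is rewritten as the packet moves between DMF's), while the locators and pointer record global progress toward the destination.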

28 Datagram Mode Operation  While in a DMF, the forwarding routers do not examine the locators, only the flow-id.  Only active routers, ones that actually make a decision about where to send the packet, look at the locators.  At each active router, the router examines the locators in the header to see where to send the packet next.  It then assigns the packet to the appropriate flow and sends it off.
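The active/non-active split above can be sketched as a single forwarding function. All names and the longest-prefix-match rule are assumptions for illustration; the slides do not specify how an active router chooses the next DMF.

```python
# Sketch of the forwarding decision described above: a non-active router
# forwards on the flow-id alone; an active router matches the destination
# locator against its DMF table (longest prefix assumed here), then
# rewrites the packet's flow-id to assign it to the next DMF.

def forward(hdr, flow_table, dmf_table, active):
    if not active:
        return flow_table[hdr["path_id"]]  # flow-id lookup only
    best = max((p for p in dmf_table if hdr["dst"][:len(p)] == p),
               key=len, default=None)
    if best is None:
        raise KeyError("no DMF toward destination")
    hdr["path_id"] = dmf_table[best]       # assign packet to the next DMF
    return hdr["path_id"]

hdr = {"path_id": 3, "dst": ("EU", "CERN", "h9")}
dmf_table = {("EU",): 10, ("EU", "CERN"): 11}
```

The key property is that the expensive locator examination happens only at active routers; everything in between is a flat flow-id lookup.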

29 Datagram Mode Flows  All routers have to contain a minimal set of pre-setup Datagram Mode flows to certain routers at critical places in the abstraction hierarchy.  These flows are used to carry Datagram mode packets through the network.  It is purely a local decision which of those flows to set up.  There is, however, a minimum set of flows which must be able to be set up for the system to operate.

30 Datagram Mode Efficiency and Robustness  The forwarding of Datagram mode packets can be quite efficient (possibly more so than even standard hop-by-hop).  In the non-active routers, the packet is associated with a flow.  In active routers, the process of looking up the next DMF would be about as expensive as the current routing table lookup.

31 Datagram Mode Efficiency and Robustness  It can easily be seen that the process guarantees that the resulting path is loop-free: Each flow selected must necessarily get the packet closer to its destination. The flows themselves are guaranteed not to loop when their paths are selected, prior to being set up.

32 Multicast Support  Nimrod approaches multicast with the same ideas used elsewhere: Try and break the problem up into pieces. Put as much of the functionality as possible outside the architecture. Allow flexibility in algorithms, etc. This last is especially important for multicast, where group sizes can vary by many orders of magnitude.

33 Multicast Support  Nimrod separates several distinct phases of creating a multicast group: - Determining membership - Deciding what kind of data distribution you want (per source, or whatever) - Calculating one or more spanning trees which connect the members - Installing the state about those spanning trees in the routers - Forwarding user data

34 Multicast Support  Nimrod provides some very fundamental multicast building block(s), such as multicast flow setup.  All the rest (like the mechanisms to maintain the state about group membership, calculate the spanning tree, etc) are outside the core architecture.  Users can select locally whichever algorithm makes sense in their particular application.

35 Multicast Support  This has all the advantages of the Nimrod approach to unicast: - Makes it a simpler design - Makes it less likely there will be something wrong - Allows experiments with new algorithms - Allows incremental deployment of new algorithms

36 Multicast Support  Nimrod will also distinguish between a multicast group (i.e. a name for the set of members) and a particular multicast flow (i.e. a particular data distribution channel to that group).  There can be multiple flows which go to the same group, but controlled independently.
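The group/flow distinction above amounts to two separate namespaces, sketched here with assumed types and example values:

```python
# Sketch of the distinction described above: a multicast group names a
# set of members; each flow is an independent data distribution channel
# to that group. Ids and tree labels are illustrative placeholders.

groups = {"g1": {"A", "B", "C"}}        # group id -> member set
flows = {101: ("g1", "tree-from-A"),    # flow id -> (group, spanning tree)
         102: ("g1", "tree-from-B")}    # two flows to the same group
```

Separating the two lets flows to the same group be set up, torn down, and resourced independently.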

37 Flow Aggregation  Provides no extra capability to the users.  Adds complexity. So, why add it?  Because it is needed to allow a positive economy of scale in high-speed switches, since the unit cost of per-flow memory is higher there.  Flow aggregation also has the nice property that it works well with virtual links.
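Flow aggregation as motivated above can be sketched as a mapping from shared path segments to aggregate flows; the class and method names are assumptions for illustration.

```python
# Illustrative sketch of flow aggregation: individual flows sharing a
# common path segment are carried inside one aggregate flow, so core
# switches hold state per aggregate rather than per flow.

class AggregationPoint:
    def __init__(self):
        self.agg = {}      # shared path segment -> aggregate flow id
        self.members = {}  # aggregate flow id -> member flow ids
        self._next = 0

    def aggregate(self, flow_id, segment):
        if segment not in self.agg:        # first flow on this segment
            self.agg[segment] = self._next
            self.members[self._next] = set()
            self._next += 1
        a = self.agg[segment]
        self.members[a].add(flow_id)
        return a

ap = AggregationPoint()
a1 = ap.aggregate("f1", ("X", "Y"))  # same segment -> same aggregate
a2 = ap.aggregate("f2", ("X", "Y"))
a3 = ap.aggregate("f3", ("X", "Z"))  # different segment -> new aggregate
```

Core switches then need one state entry per aggregate, which is what yields the positive economy of scale the slide claims for high-speed switches.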

38 References  RFC 1992, "The Nimrod Routing Architecture," available at http://www.ietf.org  J. Noel Chiappa, "Nimrod, A New Routing and Addressing Architecture for the Internet"

39 THANK YOU!

