INFN GRID Workshop, Bari, October
Network Services and Compute Grids: Capabilities and Use Cases
(original title: "Servizi di rete e Grid: caratteristiche e scenari applicativi" — Network services and Grids: characteristics and application scenarios)
INFN CNAF
Outline
- Quality of Service
  - Layer-3 technologies
  - Layer-1 lambda technologies
- Security and privacy
  - Layer-1/2/3 Virtual Private Networks
  - 10 GE WAN PHY
- Conclusions
Requirements I: Quality of Service
Quality of Service
Application requirements:
- File transfer with deadline and high-throughput file transfer:
  - guaranteed bandwidth, low packet loss, high reliability
  - use of enhanced TCP stack implementations and other non-TCP-friendly transport protocols
- Remote visualization, data correlation, remote instrument control (GRIDCC):
  - low packet loss, low one-way delay and delay variation
Middleware requirements:
- Communication between the various Grid services: low delay, low packet loss, high reliability
- On-demand guaranteed bandwidth (e.g. for the workload management service)
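The "file transfer with deadline" requirement above translates directly into a bandwidth request. A minimal sketch (the function name and the 10 TB / 24 h figures are illustrative assumptions, not from the talk):

```python
def required_bandwidth_gbps(file_size_tb: float, deadline_hours: float) -> float:
    """Minimum sustained rate (Gbit/s) needed to move file_size_tb terabytes
    within deadline_hours hours, ignoring protocol overhead and retries."""
    bits = file_size_tb * 1e12 * 8          # TB -> bits (decimal units)
    seconds = deadline_hours * 3600
    return bits / seconds / 1e9             # bit/s -> Gbit/s

# e.g. replicating a hypothetical 10 TB dataset to a Tier-1 within 24 hours:
rate = required_bandwidth_gbps(10, 24)
print(f"{rate:.2f} Gbit/s")                 # ~0.93 Gbit/s sustained
```

In practice the reservation would have to be padded for protocol overhead and restarts, which is why the middleware needs *guaranteed* rather than best-effort bandwidth.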
Quality of Service: how (1/3)
Layer-3, IP-based Quality of Service (Differentiated Services):
- Different traffic classes are distinguished by a code point in the IP header (the Differentiated Services Code Point, DSCP)
- Traffic conditioning:
  - packet classification
  - marking
  - scheduling (traffic of different classes assigned to different queues)
  - policing and shaping
- Complex network engineering is needed
- Offered today by GEANT, the European research backbone, and a few NRNs:
  - IP Premium (low delay, low packet loss, guaranteed bandwidth)
  - Less than Best Effort (low-priority traffic; bandwidth usage can range from 1% to 100%)
- On-demand configuration: technically possible but not supported today (EGEE JRA1 and JRA4)
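To make the DSCP concrete, the sketch below marks a socket's traffic from the end host. This is only an illustration of where the code point lives (the top 6 bits of the IP TOS byte): in the deployments described here the marking is normally applied or re-written by the site border router, not trusted from end systems.

```python
import socket

# Standard DSCP values (RFC 2474 / RFC 3246); the LBE mapping to CS1 is
# common practice rather than a standard.
DSCP_EF  = 46   # Expedited Forwarding -> IP Premium-like treatment
DSCP_CS1 = 8    # often used for Less than Best Effort / Scavenger

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# The DSCP occupies bits 7..2 of the TOS byte, hence the 2-bit shift:
s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF << 2)
```

The same 6-bit value is what the classification and policing stages above match on at each DiffServ-capable hop.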
Quality of Service: how (2/3)
Dedicated (on-demand) layer-1 connectivity:
- Multiple wavelengths can coexist on the same fiber strand, and each wavelength can be used as a dedicated point-to-point connection: up to 128 parallel paths at 10 Gbit/s (data transmission rate per fiber: 1.28 Tbit/s)
- Framing: SONET (Synchronous Optical Network) or SDH (Synchronous Digital Hierarchy) for long distances; Gigabit Ethernet over SONET/SDH for short distances
- Importance of owning dark fiber
- Minimization of hardware costs (multiple communication channels per physical interface are possible)
- Dynamic set-up: protocol standardization is ongoing, e.g. Generalized Multiprotocol Label Switching (GMPLS) at the IETF; inter-domain set-up is still a research field
- Sub-wavelength bandwidth allocation: Optical Burst Switching (OBS) for finer-grained, sub-lambda allocation
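A quick check of the per-fiber figure quoted above (all numbers are from the slide itself, not vendor data):

```python
# DWDM back-of-the-envelope: aggregate capacity of one fiber strand.
wavelengths_per_fiber = 128   # parallel lambdas on one strand
rate_per_lambda_gbps  = 10    # 10 Gbit/s per wavelength

total_tbps = wavelengths_per_fiber * rate_per_lambda_gbps / 1000
print(total_tbps, "Tbit/s per fiber strand")   # 1.28 Tbit/s
```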
Quality of Service: how (3/3)
Dedicated (on-demand) layer-1 connectivity (cont.):
- Very useful for data-intensive applications (e.g. data movement, data replication)
- Allows the use of non-TCP transport protocols
- Supports traffic isolation, so packet loss is reduced
- Can reduce traffic at potential network bottlenecks (e.g. HEP Tier-0 sites) if, for example, the WMS can trigger a file transfer to an SE close to a CE of choice, with guaranteed bandwidth
Global Lambda Integrated Facility (GLIF)*
GLIF is a consortium of institutions, organizations, consortia and national research & education networks that voluntarily share optical networking resources and expertise to develop the Global LambdaGrid for the advancement of scientific collaboration and discovery.
GLIF is led by SURFnet and the University of Amsterdam in the Netherlands.
(*) Maxine Brown, University of Illinois at Chicago
GLIF (Sept 2004)
CA*net 4 and SURFnet6
CANARIE: User-Controlled Light Paths (UCLP)
- Lambdas are allocated to users to create ad-hoc network infrastructures (completely managed by the user) between a few sites with specific network requirements
- SONET, DWDM, optical cross-connects
SURFnet 6 (beginning of 2006):
- Institutions mainly connected through dark fiber
- Core: dark fiber owned by SURFnet, lambdas on a DWDM core
- Access speed: 2 x 10 Gbit/s (IP), plus a few 2.5 or 10 Gbit/s lambdas
- Access: IP over DWDM using POS framing and Ethernet framing (1 GE and 10 GE)
- IP services in 5 PoPs (Avici routers)
- Layer-1 connectivity: optical cross-connects at the border
Typical large system today (*)
[Diagram: sensors and instrument pods connect through layer-2 switches and layer-3 switches/routers, over SONET/DWDM, to processing resources; Grid security and OGSA web services sit between the user and the resources, with the Internet/VPN as transport.]
(*) Bill St. Arnaud (CANARIE), Terena Networking Conference, June 2004
Network recursive architecture with web-service workflow bindings (*)
[Diagram: the user drives sensors, instrument pods, a LAN data-management system, processing and HPC resources through web-service interfaces (WS); CA*net 4 VPNs and CA*net 4 lightpaths interconnect them. WS* denotes CANARIE UCLP web services, WS** new web services.]
(a) an Ethernet switch; (b) a GbE port on an SDH multiplexer (e.g. the ONS 15454); (c) a transponder of DWDM transport gear (e.g. the ONS 15252).
(*) Bill St. Arnaud (CANARIE), Terena Networking Conference, June 2004
SURFnet 6: provisioning of IP services (*)
[Diagram: 1 GE and 10 GE customers attach via OME 6500 CPE and Passport 8600 GE switches to the SURFnet6 layer-2/layer-1 network (OM 5000 DWDM, OME 6500 nodes, RPR rings); Avici SSR core and border routers provide external IP connectivity.]
(*) Kees Neggers, Internet2 International Task Force, Apr 2004
Provisioning of light paths (*)
[Diagram: customer equipment connects at 10 GE through OME 6500 nodes and 16x16 MEMS optical switches in the SURFnet6 layer-2/layer-1 network, providing regional and international light-path connectivity to sites in Amsterdam.]
(*) Kees Neggers, Internet2 International Task Force, Apr 2004
National LambdaRail (NLR), USA (*)
- Dark fiber, national footprint: fiber obtained with 20-year IRUs (initial build from Level 3; the second stage includes other providers)
- Serves network research and very high-end experimental and research applications
- 4 x 10 Gbit/s wavelengths initially; capable of 40 x 10 Gbit/s wavelengths at build-out
- NLR supports production and experimental ("breakable") infrastructures at each layer (1, 2 and 3)
(*) John Silvester, Terena Networking Conference, June 2004
NLR Phase 1 and 2 (*)
[Map: NLR footprint over Qwest, Level 3, AT&T and WilTel fiber, with PoPs including Seattle, Sunnyvale, LA, San Diego, Denver, Phoenix, El Paso-Las Cruces, Albuquerque, Dallas, San Antonio, Houston, Tulsa, KC, Baton Rouge, Pensacola, Jacksonville, Atlanta, Raleigh, Washington DC, Pittsburgh, Cleveland, Chicago and New York.]
(*) John Silvester, Terena Networking Conference, June 2004
Requirements II: Security and Privacy
Virtual Private Networks
Security is an inherent requirement for any Grid service, and security and privacy can also be required by Grid applications. Virtual Private Networks connecting members with a mutual trust relationship (e.g. the members of a given VO) can be used as a means to deliver security and privacy to those members, for example when data-access protocols do not provide integrity and confidentiality.
Virtual Private Networks: how
VPN:
- "a generic term used to refer to the capability of both private and public networks to support a communication infrastructure connecting geographically dispersed sites where users can communicate among them as if they were in a private network" (RFC 2764)
- VPNs can support data isolation by separating, for each VPN, the forwarding control plane, the signalling and the routing information in the intermediate forwarding devices
Layer-3 VPNs:
- interconnect sets of hosts and routers based on layer-3 addresses (e.g. IP addresses)
Layer-2 VPNs:
- emulate the functionality of a Local Area Network in a wide-area environment
Layer-1 VPNs:
- connect a number of Customer Edge devices with point-to-point connections operated at layer-1, based on either optical or Time Division Multiplexing network infrastructures
Layer-2 VPNs
- Layer-2 VPNs can help Grids bypass firewalls, avoiding performance penalties for data-intensive applications.
- They can be used to temporarily group geographically dispersed resources that belong to the same Grid Virtual Organization (a group of users with the same resource-sharing policies).
- Layer-2 VPNs can be used to connect local devices (instrumentation, Grid resources, etc.) to remote Grid sites.
MPLS-based Layer-2 VPNs
- MPLS: already supported by GEANT, GARR and other European NRNs
- Successfully tested in DataTAG between Bologna, CERN and Karlsruhe
- Performance on production paths: sporadic packet loss, generally good (920 Mbit/s memory-to-memory, with end-systems connected at 1 Gbit/s)
- A given host can belong to more than one VPN at a time if native VLAN tagging is enabled
- The LSP primary/secondary path can apply non-standard routing policies
- A given DiffServ packet-forwarding treatment can be assigned to the LSPs associated with a given VPN (MPLS EXP field set by the LSP head-end router):
  - GridFTP between SEs: if based on enhanced TCP stacks, it can be handled through the Scavenger/Less than Best Effort service (fairness)
  - CEs/SEs used for remote visualization with real-time requirements could use the IP Premium service
  - Performance guarantees to individual VOs
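The per-VO mapping sketched above is essentially a policy table. A minimal sketch of how a provisioning tool might represent it (the VO names, traffic classes and EXP values are illustrative assumptions; EXP is the 3-bit MPLS experimental/CoS field written by the head-end router):

```python
# Illustrative policy table: (VO, traffic type) -> (service class, MPLS EXP).
# EXP values here follow common, not standardised, practice.
VPN_POLICY = {
    ("cms",    "gridftp-bulk"):   ("less-than-best-effort", 1),
    ("gridcc", "remote-control"): ("ip-premium",            5),
    ("atlas",  "default"):        ("best-effort",           0),
}

def exp_for(vo: str, traffic: str) -> int:
    """EXP bits the LSP head-end router should set for this VO's traffic;
    unknown (VO, traffic) pairs fall back to best effort."""
    service, exp = VPN_POLICY.get((vo, traffic), ("best-effort", 0))
    assert 0 <= exp <= 7, "EXP is a 3-bit field"
    return exp
```

A real deployment would push these values into the head-end router configuration; the point here is only that the VPN/DiffServ binding is a small, per-VO lookup.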
10 Gigabit Ethernet WAN PHY
IEEE 802.3ae: Ethernet capable of spanning world-wide distances
- No Carrier Sense Multiple Access / Collision Detection (CSMA/CD); full duplex only
- Two types of transceiver:
  - LAN PHY: 10 Gbit/s data rate, 600 km maximum distance without regenerators, transmission rate incompatible with WAN infrastructures
  - WAN PHY (STS-192c): compatible with SONET/SDH in terms of data rate and encapsulation
- WAN PHY tests:
  - over DWDM
  - over a SONET circuit (through an ONS 15454)
  - test sites: CERN, NIKHEF, Ottawa
  - steady 5.4 Gbit/s TCP throughput (end-system limited, memory-to-memory), CERN-NL
  - steady 5.67 Gbit/s, Ottawa-CERN
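The STS-192c compatibility claim can be checked with the standard SONET figures (the rates below are from the SONET and 802.3ae specifications, not from the tests on this slide):

```python
# SONET arithmetic: an STS-1 runs at 51.84 Mbit/s; STS-192c concatenates 192.
STS1_MBPS = 51.84
line_rate_gbps = 192 * STS1_MBPS / 1000    # 9.95328 Gbit/s on the wire

# The 10GBASE-W payload rate after SONET overhead is 9.58464 Gbit/s, so the
# 5.4-5.67 Gbit/s TCP results were limited by the end systems, not the circuit.
payload_rate_gbps = 9.58464
print(line_rate_gbps, payload_rate_gbps)
```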
Conclusions
What network services for INFN GRID?
- Layer-3 Quality of Service for data-intensive applications
- On-demand bandwidth tools: EGEE JRA1, ongoing
- Layer-2 VPNs: more work on application scenarios needed
- Layers 1 and 2: after DataTAG, a lack of dedicated high-speed test infrastructures. Connectivity to GLIF?
- Lambda services to CERN and other Tier-1 sites: how? when? A joint research program with GARR!
- 10 GE WAN PHY: very promising; applicability to INFN GRID to be investigated