Creating a Sustainable Cycle of Innovation. Harvey B. Newman, Caltech. WSIS Pan European Regional Ministerial Conference, Bucharest, November 7-9, 2002.

Presentation transcript:

Creating a Sustainable Cycle of Innovation: Global Virtual Organizations for Data Intensive Science. Harvey B. Newman, Caltech. WSIS Pan European Regional Ministerial Conference, Bucharest, November 7-9, 2002.

Challenges of Data Intensive Science and Global VOs
- Geographical dispersion: of people and resources
- Scale: tens of Petabytes per year of data
- Complexity: scientific instruments and information
Physicists from 250+ institutes in 60+ countries. Major challenges are associated with:
- Communication and collaboration at a distance
- Managing globally distributed computing & data resources
- Cooperative software development and physics analysis
New forms of distributed systems: Data Grids

Emerging Data Grid User Communities
- Grid Physics Projects (GriPhyN/iVDGL/EDG): ATLAS, CMS, LIGO, SDSS; BaBar/D0/CDF
- NSF Network for Earthquake Engineering Simulation (NEES): integrated instrumentation, collaboration, simulation
- Access Grid; VRVS: supporting new modes of group-based collaboration
- And Genomics, Proteomics, ...; the Earth System Grid and EOSDIS; federating brain data; computed microtomography; virtual observatories
Grids are having a global impact on research in science & engineering.

Global Networks for HENP and Data Intensive Science
- National and international networks, with sufficient capacity and capability, are essential today for
  - The daily conduct of collaborative work in both experiment and theory
  - Data analysis by physicists from all world regions
  - The conception, design and implementation of next generation facilities, as "global (Grid) networks"
- "Collaborations on this scale would never have been attempted, if they could not rely on excellent networks" (L. Price, ANL)
- Grids require seamless network systems with known, high performance

High Speed Bulk Throughput: BaBar Example [and LHC]
[Chart: data volume growth compared with Moore's law.]
- Driven by:
  - HENP data rates, e.g. BaBar ~500 TB/year, with a data rate from the experiment of >20 MBytes/s [5-75 times more at LHC]
  - A Grid of multiple regional computer centers (e.g. Lyon-FR, RAL-UK, INFN-IT; in CA: LBNL, LLNL, Caltech) needing copies of the data
- Need high-speed networks and the ability to utilize them fully
  - High speed today = 1 TB/day (~100 Mbps full time)
  - Develop TB/day capability (several Gbps full time) within the next 1-2 years
Data volumes are more than doubling each year, driving Grid and network needs.
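As a sanity check on the "1 TB/day is roughly 100 Mbps used full time" figure, here is a minimal sketch of the arithmetic (decimal terabytes assumed; not part of the original slides):

```java
// Quick check of the slide's rule of thumb: 1 TB/day of sustained transfer
// corresponds to roughly 100 Mbps used full time, and ~1 Gbps moves ~10 TB/day.
public class BulkThroughput {
    // Convert a daily data volume (in terabytes) into the average rate in Mbps.
    static double terabytesPerDayToMbps(double tbPerDay) {
        double bitsPerDay = tbPerDay * 1e12 * 8;   // 1 TB = 10^12 bytes
        double secondsPerDay = 24 * 3600;
        return bitsPerDay / secondsPerDay / 1e6;   // megabits per second
    }

    public static void main(String[] args) {
        System.out.printf("1 TB/day  ~ %.0f Mbps%n", terabytesPerDayToMbps(1));   // ~93 Mbps
        System.out.printf("10 TB/day ~ %.0f Mbps%n", terabytesPerDayToMbps(10));  // ~926 Mbps
    }
}
```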

HENP Major Links: Bandwidth Roadmap (Scenario) in Gbps
Continuing the trend: ~1000 times bandwidth growth per decade. We are rapidly learning to use and share multi-Gbps networks.

AMS-IX Internet Exchange Throughput: Accelerating Growth in Europe (NL)
[Charts: monthly traffic, showing 4X growth in the 14 months 8/01 - 10/02; hourly traffic on 11/02/02, on a 0-10 Gbps scale.]
HENP & world bandwidth growth: 3-4 times per year; 2 to 3 times Moore's law.
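The "3-4 times per year" figure can be checked against the 4X-in-14-months data point; a small sketch of the annualization (not from the original slides):

```java
// Annualize the AMS-IX data point: a factor of 4 over 14 months is about
// 4^(12/14) ~ 3.3x per year, consistent with the slide's 3-4x/year claim and
// well above the ~1.6x/year pace of Moore's law (doubling every ~18 months).
public class GrowthRate {
    static double annualFactor(double growth, double months) {
        return Math.pow(growth, 12.0 / months);
    }

    public static void main(String[] args) {
        System.out.printf("4x in 14 months = %.2fx per year%n", annualFactor(4, 14)); // ~3.28
    }
}
```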

National Light Rail Footprint
[Map: NLR fiber routes and terminal/regen/OADM sites across US cities including SEA, POR, SAC, SVL, FRE, LAX, SDG, PHO, DEN, DAL, KAN, OGD, CHI, CLE, PIT, ATL, NAS, STR, WAL, RAL, OLG, NYC, BOS, WDC.]
- Buildout starts November 2002
- Initially 4 x 10 Gb wavelengths
- To 40 10 Gb waves in the future
NREN backbones reached the Gbps range in 2002 in Europe, Japan and the US. US: transition now to optical, dark fiber, multi-wavelength R&E networks.

Distributed System Services Architecture (DSSA): CIT/Romania/Pakistan
- Agents: autonomous, auto-discovering, self-organizing, collaborative
- "Station Servers" (static) host mobile "Dynamic Services"
- Servers interconnect dynamically; they form a robust fabric in which mobile agents travel, with a payload of (analysis) tasks
- Adaptable to Web services (OGSA) and many platforms
- Adaptable to ubiquitous, mobile working environments
[Diagram: station servers and lookup services linked by proxy exchange, registration, lookup/discovery and remote notification.]
Managing global systems of increasing scope and complexity, in the service of science and society, requires a new generation of scalable, autonomous, artificially intelligent software systems. A conceptual sketch of the station-server/agent pattern follows.
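To make the pattern concrete, here is a minimal, self-contained sketch in plain Java. It is not the project's actual Jini/RMI code and all class names are hypothetical; it only illustrates station servers registering with a lookup service and a mobile agent visiting them with a task payload:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical, in-process illustration of the DSSA idea: static "station servers"
// register with a lookup service, and a mobile agent hops between them with a task.
class LookupService {
    private final List<StationServer> stations = new ArrayList<>();
    void register(StationServer s) { stations.add(s); }               // registration
    List<StationServer> discover()  { return List.copyOf(stations); } // discovery
}

class StationServer {
    final String name;
    StationServer(String name) { this.name = name; }
    // Host a visiting agent: run its payload against local resources.
    void host(MobileAgent agent) { agent.execute(this); }
}

class MobileAgent {
    private final String taskPayload;   // e.g. an analysis task description
    MobileAgent(String taskPayload) { this.taskPayload = taskPayload; }
    void execute(StationServer at) {
        System.out.println("Running '" + taskPayload + "' at station " + at.name);
    }
}

public class DssaSketch {
    public static void main(String[] args) {
        LookupService lookup = new LookupService();
        lookup.register(new StationServer("CIT"));
        lookup.register(new StationServer("Bucharest"));
        lookup.register(new StationServer("NUST"));

        MobileAgent agent = new MobileAgent("histogram fill");
        // The agent travels across the fabric of station servers it discovered.
        for (StationServer s : lookup.discover()) s.host(agent);
    }
}
```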

MonALISA: A Globally Scalable Grid Monitoring System (by I. Legrand, Caltech)
- Deployed on the US CMS Grid
- Agent-based dynamic information / resource discovery mechanism
- Implemented in:
  - Java/Jini; SNMP
  - WSDL / SOAP with UDDI
- Part of a global "Grid Control Room" service

History: Throughput Quality Improvements from the US to the World
Bandwidth of TCP < MSS / (RTT * sqrt(Loss))   (1)
[Chart: ~80% annual improvement, a factor of ~100 over 8 years.]
Progress, but the Digital Divide is maintained: action is required.
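A small worked example of the TCP bound in equation (1), using illustrative values (an MSS of 1460 bytes, 100 ms RTT, 10^-5 packet loss; the order-one constant of the Mathis formula is omitted here, as on the slide):

```java
// Evaluate the slide's TCP throughput bound: BW < MSS / (RTT * sqrt(loss)).
public class TcpBound {
    static double boundBitsPerSecond(int mssBytes, double rttSeconds, double lossRate) {
        return (mssBytes * 8.0) / (rttSeconds * Math.sqrt(lossRate));
    }

    public static void main(String[] args) {
        double bps = boundBitsPerSecond(1460, 0.100, 1e-5);
        System.out.printf("Upper bound ~ %.1f Mbps%n", bps / 1e6);  // ~36.9 Mbps
        // Halving the loss rate or the RTT raises the bound accordingly, which is
        // why long, lossy international paths lag so far behind.
    }
}
```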

NREN Core Network Size (Mbps-km), Logarithmic Scale
[Chart: NREN core network size on a logarithmic scale (Mbps-km), with countries grouped from "Lagging" and "In Transition" to "Leading" and "Advanced"; examples include Ro, It, Pl, Gr, Ir, Ukr, Hu, Cz, Es, Nl, Fi, Ch.]
Perspectives on the Digital Divide: international, local, regional, political.

Building Petascale Global Grids: Implications for Society
- Meeting the challenges of Petabyte-to-Exabyte Grids, and Gigabit-to-Terabit networks, will transform research in science and engineering
- These developments could create the first truly global virtual organizations (GVO)
- If these developments are successful, and deployed widely as standards, they could lead to profound advances in industry, commerce and society at large
  - By changing the relationship between people and "persistent" information in their daily lives
  - Within the next five to ten years
- Realizing the benefits of these developments for society, and creating a sustainable cycle of innovation, compels us TO CLOSE the DIGITAL DIVIDE

Recommendations
To realize the vision of Global Grids, governments, international institutions and funding agencies should:
- Define international IT policies (for instance AAA)
- Support the establishment of international standards
- Provide adequate funding to continue R&D in Grid and network technologies
- Deploy international production Grid and advanced network testbeds on a global scale
- Support education and training in Grid & network technologies for new communities of users
- Create open policies, and encourage joint development programs, to help close the Digital Divide
The WSIS RO meeting, starting today, is an important step in the right direction.

Some Extra Slides Follow

IEEAF: Internet Educational Equal Access Foundation; Bandwidth Donations for Research and Education

Next Generation Requirements for Physics Experiments
- Rapid access to event samples and analyzed results drawn from massive data stores
  - From Petabytes in 2002, ~100 Petabytes by 2007, to ~1 Exabyte by ~2012
- Coordinating and managing the large but LIMITED computing, data and network resources effectively
- Persistent access for physicists throughout the world, for collaborative work
Grid Reliance on Networks
- Advanced applications such as Data Grids rely on seamless operation of Local and Wide Area Networks
  - With reliable, quantifiable high performance

Networks, Grids and HENP
- Grids are changing the way we do science and engineering
- Next generation 10 Gbps network backbones are here: in the US, Europe and Japan; across oceans
  - Optical nets with many 10 Gbps wavelengths will follow
- Removing regional and last mile bottlenecks, and compromises in network quality, is now on the critical path
- Network improvements are especially needed in SE Europe, South America, and many other regions:
  - Romania; India, Pakistan, China; Brazil, Chile; Africa
- Realizing the promise of network & Grid technologies means
  - Building a new generation of high performance network tools and artificially intelligent, scalable software systems
  - Strong regional and inter-regional funding initiatives to support these ground-breaking developments

Closing the Digital Divide: What HENP and the World Community Can Do
- Spread the message: ICFA SCIC, IEEAF et al. can help
- Help identify and highlight specific needs (to work on)
  - Policy problems; last mile problems; etc.
- Encourage joint programs [Virtual Silk Road project; Japanese links to SE Asia and China; AMPATH to South America]
  - NSF & LIS proposals: US and EU to South America
- Make direct contacts, arrange discussions with government officials
  - ICFA SCIC is prepared to participate where appropriate
- Help start, and get support for, workshops on Networks & Grids
  - Encourage, and help form, funded programs
- Help form regional support & training groups [requires funding]

LHC Data Grid Hierarchy
[Diagram: the online system feeds Tier 0+1 at CERN (700k SI95; ~1 PB disk; tape robot) at ~PByte/sec; Tier 1 centers such as FNAL (200k SI95; 600 TB) and the IN2P3, INFN and RAL centers; Tier 2 centers (~0.25 TIPS) on 0.1-10 Gbps and ~2.5 Gbps links; Tier 3 institutes; Tier 4 workstations fed at ~MBytes/sec; physics data caches.]
- Physicists work on analysis "channels"; each institute has ~10 physicists working on one or more channels
- CERN/Outside resource ratio ~1:2; Tier0 : (sum of Tier1) : (sum of Tier2) ~ 1:1:1

Tier A: two centers (the computing centres in Lyon and at Stanford) trying to work as one
- Data not duplicated
- Internationalization
- Transparent access, etc.
"Physicists have indeed foreseen to test the GRID principles starting first from the Computing Centres in Lyon and Stanford (California). A first step towards the ubiquity of the GRID." (Le Monde, 12 April 2001)
Connectivity: CERN-US line, Abilene, Renater, ESnet (as of the 3/02 LHC Grid Workshop); 2003: to the 1 Gbps range. Data volumes of 0.5 PB and up; LHC 10 to 100 times greater.

Why Grids? (Argonne / Chicago)
- 1,000 physicists worldwide pool resources for petaop analyses of petabytes of data
- A biochemist exploits 10,000 computers to screen 100,000 compounds in an hour
- Civil engineers collaborate to design, execute, & analyze shake table experiments
- Climate scientists visualize, annotate, & analyze terabyte simulation datasets
- An emergency response team couples real-time data, weather model, population data

Why Grids? (contd., Argonne / Chicago)
- Scientists at a multinational company collaborate on the design of a new product
- A multidisciplinary analysis in aerospace couples code and data in four companies
- An HMO mines data from its member hospitals for fraud detection
- An application service provider offloads excess load to a compute cycle provider
- An enterprise configures internal & external resources to support e-business workload

Grids: Why Now? (Argonne / Chicago)
- Moore's law improvements in computing produce highly functional end systems
- The Internet and burgeoning wired and wireless networks provide universal connectivity
- Changing modes of working and problem solving emphasize teamwork and computation
- Network exponentials produce dramatic changes in geometry and geography
  - 9-month doubling: double Moore's law! (x340,000; x4,000?)

A Short List: Revolutions in Information Technology (2002-7)
- Scalable data-intensive metro and long haul network technologies
  - DWDM: 10 Gbps then 40 Gbps per wavelength; 1 to 10 Terabits/sec per fiber
  - 10 Gigabit Ethernet; 10GbE / 10 Gbps LAN/WAN integration
  - Metro buildout and optical cross connects
  - Dynamic provisioning, leading to dynamic path building ("Lambda Grids")
- Defeating the "Last Mile" problem (wireless, or Ethernet in the First Mile)
  - 3G and 4G wireless broadband (from ca. 2003); and/or fixed wireless "hotspots"
  - Fiber to the home
  - Community-owned networks

Grid Architecture (Argonne / Chicago)
- "Controlling things locally" (access to, & control of, resources): the Fabric layer
- "Talking to things" (communication via Internet protocols, & security): the Connectivity layer
- "Sharing single resources" (negotiating access, controlling use): the Resource layer
- "Coordinating multiple resources" (ubiquitous infrastructure services, application-specific distributed services): the Collective layer
- Applications on top; the layering is analogous to the Internet protocol architecture (Link, Internet, Transport, Application).

LHC Distributed CM: HENP Data Grids Versus Classical Grids
- Grid projects have been a step forward for HEP and the LHC: a path to meet the "LHC Computing" challenges
  - But the differences between HENP Grids and classical Grids are not yet fully appreciated
- The original Computational and Data Grid concepts are largely stateless, open systems, known to be scalable
  - Analogous to the Web
- The classical Grid architecture has a number of implicit assumptions
  - The ability to locate and schedule suitable resources within a tolerably short time (i.e. resource richness)
  - Short transactions; relatively simple failure modes
- HEP Grids are data-intensive and resource-constrained
  - Long transactions; some long queues
  - Schedule conflicts; [policy decisions]; task redirection
  - A lot of global system state to be monitored and tracked

Upcoming Grid Challenges: Building a Globally Managed Distributed System
- Maintaining a global view of resources and system state
  - End-to-end system monitoring
  - Adaptive learning: new paradigms for execution optimization (eventually automated)
- Workflow management, balancing policy versus moment-to-moment capability to complete tasks
  - Balance high levels of usage of limited resources against better turnaround times for priority jobs
  - Goal-oriented; steering requests according to (yet to be developed) metrics
- Robust Grid transactions in a multi-user environment
- Real-time error detection and recovery
  - Handling user-Grid interactions: guidelines; agents
- Building higher level services, and an integrated user environment, for the above

Interfacing to the Grid: Above the Collective Layer
- (Physicists') application codes
- Experiments' software framework layer
  - Needs to be modular and Grid-aware: an architecture able to interact effectively with the Grid layers
- Grid applications layer (parameters and algorithms that govern system operations)
  - Policy and priority metrics
  - Workflow evaluation metrics
  - Task-site coupling proximity metrics
- Global end-to-end system services layer
  - Monitoring and tracking of component performance
  - Workflow monitoring and evaluation mechanisms
  - Error recovery and redirection mechanisms
  - System self-monitoring, evaluation and optimization mechanisms

DataTAG Project
[Diagram: 2.5 Gbps wavelength triangle Geneva - New York - StarLight/STAR TAP, interconnecting NL SURFnet, UK SuperJANET4, IT GARR-B, FR Renater, GEANT, Abilene, ESnet and CalREN.]
- EU-solicited project: CERN, PPARC (UK), Amsterdam (NL) and INFN (IT), with US (DOE/NSF: UIC, NWU and Caltech) partners
- Main aims:
  - Ensure maximum interoperability between US and EU Grid projects
  - Transatlantic testbed for advanced network research
- 2.5 Gbps wavelength triangle from 7/02 (10 Gbps triangle in 2003)

TeraGrid (NCSA, ANL, SDSC, Caltech): A Preview of the Grid Hierarchy and Networks of the LHC Era
[Diagram: DTF backplane of 4 x 10 Gbps linking NCSA/UIUC, ANL, Caltech and San Diego via multiple carrier hubs, StarLight/Northwestern University, UIC, Illinois Institute of Technology, University of Chicago and Indianapolis (Abilene NOC); OC-48 (2.5 Gb/s, Abilene), multiple 10 GbE (Qwest), and multiple 10 GbE over I-WIRE dark fiber. Source: Charlie Catlett, Argonne.]

Baseline Bandwidth for the US-CERN Link: HENP Transatlantic WG (DOE+NSF)
- DataTAG 2.5 Gbps research link in Summer 2002
- 10 Gbps research link by approximately mid-2003
Transoceanic networking integrated with the Abilene, TeraGrid, regional nets and continental network infrastructures in the US, Europe, Asia and South America. Baseline evolution is typical of major HENP links.

HENP As a Driver of Networks: Petascale Grids with TB Transactions
- Problem: extract "small" data subsets of 1 to 100 Terabytes from 1 to 1000 Petabyte data stores
- Survivability of the HENP global Grid system, with hundreds of such transactions per day (circa 2007), requires that each transaction be completed in a relatively short time
- Example: take 800 seconds to complete the transaction. Then:

  Transaction Size (TB)   Net Throughput (Gbps)
  1                       10
  10                      100
  100                     1000 (capacity of fiber today)

- Summary: providing switching of 10 Gbps wavelengths within ~3 years, and Terabit switching within 5-8 years, would enable "Petascale Grids with Terabyte transactions", as required to fully realize the discovery potential of major HENP programs, as well as other data-intensive fields.
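The table follows directly from throughput = volume / time; a minimal sketch of the arithmetic (decimal terabytes, 800-second transactions as in the slide's example):

```java
// Required network throughput for moving a data subset in a fixed 800 s window,
// reproducing the table above: 1 TB -> 10 Gbps, 10 TB -> 100 Gbps, 100 TB -> 1000 Gbps.
public class TransactionThroughput {
    static double requiredGbps(double terabytes, double seconds) {
        double bits = terabytes * 1e12 * 8;   // 1 TB = 10^12 bytes
        return bits / seconds / 1e9;          // gigabits per second
    }

    public static void main(String[] args) {
        for (double tb : new double[] {1, 10, 100}) {
            System.out.printf("%5.0f TB in 800 s -> %6.0f Gbps%n", tb, requiredGbps(tb, 800));
        }
    }
}
```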

National Research Networks in Japan
- SuperSINET
  - Started operation January 4, 2002
  - Support for 5 important areas: HEP, genetics, nano-technology, space/astronomy, Grids
  - Provides 10 λ's:
    - 10 Gbps IP connection
    - 7 direct intersite GbE links
    - Some connections to 10 GbE in JFY2002
- HEPnet-J: will be reconstructed with MPLS-VPN in SuperSINET
- Proposal: two TransPacific 2.5 Gbps wavelengths, and a Japan-CERN Grid testbed, by ~2003
[Map: WDM paths, IP routers and optical cross-connects linking Tokyo, Osaka and Nagoya with KEK, U-Tokyo, NII (Hitotsubashi and Chiba), NAO, ISAS, Tohoku U, Nagoya U, NIFS, NIG, IMS, Kyoto U, ICR Kyoto-U, Osaka U and the Internet.]

National R&E Network Example - Germany: DFN
[Chart: transatlantic connectivity growth by quarter, on STM-4 and STM-16 links.]
- 2 x 2.5G now: NY-Hamburg and NY-Frankfurt
- ESnet peering at 34 Mbps
- Direct peering to Abilene and CANARIE expected
- UCAID will add another 2 OC-48's; proposing a Global Terabit Research Network (GTRN)
- FSU connections via satellite: Yerevan, Minsk, Almaty, Baikal
  - Speeds of kbps
- SILK Project (2002): NATO funding
  - Links to the Caucasus and Central Asia (8 countries)
  - Currently kbps
  - Propose VSAT for X BW: NATO + state funding

RNP Brazil (to 20 Mbps); FIU Miami / South America (to 80 Mbps)

Modeling and Simulation: the MONARC System
- The simulation program developed within MONARC (Models Of Networked Analysis At Regional Centres) uses a process-oriented approach for discrete event simulation, and provides a realistic modelling tool for large scale distributed systems.
- SIMULATION of complex distributed systems for the LHC.
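The sketch below is not MONARC code; it is only a minimal illustration of the discrete-event core that such a simulator is built around (an event queue ordered by simulated time), with the example delays chosen arbitrarily:

```java
import java.util.PriorityQueue;

// Minimal discrete-event simulation loop: events are dequeued in time order, and
// handling an event could schedule further events (e.g. "transfer done" after a
// delay derived from file size and link bandwidth).
public class DiscreteEventSketch {
    record Event(double time, String description) {}

    private final PriorityQueue<Event> queue =
            new PriorityQueue<>((a, b) -> Double.compare(a.time, b.time));
    private double now = 0.0;

    void schedule(double delay, String description) {
        queue.add(new Event(now + delay, description));
    }

    void run() {
        while (!queue.isEmpty()) {
            Event e = queue.poll();
            now = e.time;                      // advance simulated time
            System.out.printf("t=%8.1f s  %s%n", now, e.description);
        }
    }

    public static void main(String[] args) {
        DiscreteEventSketch sim = new DiscreteEventSketch();
        // A 100 GB transfer on a 1 Gbps link takes ~800 s of simulated time.
        sim.schedule(0.0, "job submitted at regional centre");
        sim.schedule(800.0, "100 GB input transfer complete (1 Gbps link)");
        sim.schedule(800.0 + 3600.0, "analysis job finished");
        sim.run();
    }
}
```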

Globally Scalable Monitoring Service (I. Legrand)
[Diagram: farm monitors and an RC monitor service register with lookup services; a client (or other service) discovers them through a proxy providing a component factory, GUI marshaling, code transport and RMI data access; metrics are gathered by push & pull, via rsh & ssh scripts and SNMP.]
A simplified, illustrative sketch of the pull side of such a collector follows.
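As an illustration only (this is not the service's actual implementation; the interfaces here are hypothetical), a pull-mode collector that polls registered farm monitors:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical pull-mode collector: the service keeps a list of farm monitors
// (found via a lookup/registration step, omitted here) and polls each one
// periodically, the way scripted rsh/ssh or SNMP probes would.
interface FarmMonitor {
    String farmName();
    double loadAverage();   // one example metric
}

public class MonitorCollector {
    private final List<FarmMonitor> farms = new ArrayList<>();

    void register(FarmMonitor m) { farms.add(m); }

    // One polling cycle; a real service would repeat this on a timer
    // and push updates to subscribed clients.
    void poll() {
        for (FarmMonitor m : farms) {
            System.out.printf("%-12s load=%.2f%n", m.farmName(), m.loadAverage());
        }
    }

    public static void main(String[] args) {
        MonitorCollector c = new MonitorCollector();
        c.register(new FarmMonitor() {
            public String farmName() { return "caltech-tier2"; }  // stand-in farm
            public double loadAverage() { return 0.42; }          // stand-in value
        });
        c.poll();
    }
}
```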

MONARC SONN: 3 Regional Centres Learning to Export Jobs (by I. Legrand)
[Simulation snapshot: NUST (20 CPUs), CERN (30 CPUs) and Caltech (25 CPUs), linked at 1 MB/s with 150 ms RTT, 1.2 MB/s with 150 ms RTT, and 0.8 MB/s with 200 ms RTT; a day-9 snapshot shows per-centre values of 0.73, 0.66 and 0.83.]

COJAC: CMS ORCA Java Analysis Component (Java3D; Objectivity via JNI; Web Services). Demonstrated Caltech-Rio de Janeiro (Feb.) and Chile.

Internet2 HENP WG [*]
- Mission: to help ensure that the required
  - national and international network infrastructures (end-to-end),
  - standardized tools and facilities for high performance, end-to-end monitoring and tracking [GridFTP; bbcp; ...], and
  - collaborative systems
  are developed and deployed in a timely manner, and used effectively to meet the needs of the US LHC and other major HENP programs, as well as the at-large scientific community;
  and to carry out these developments in a way that is broadly applicable across many fields.
- Formed as an Internet2 WG, a suitable framework, in October 2001
- [*] Co-Chairs: S. McKee (Michigan), H. Newman (Caltech); Sec'y: J. Williams (Indiana)
- Website: (see also the Internet2 End-to-End Initiative)

A Short List: Coming Revolutions in Information Technology
- Storage "virtualization" [a single logical resource]
  - Grid-enabled Storage Resource Middleware (SRM)
  - iSCSI (Internet Small Computer Storage Interface), integrated with 10 GbE; global file systems
- Internet information software technologies
  - Global information "broadcast" architecture
    - E.g. the Multipoint Information Distribution Protocol
  - Programmable coordinated agent architectures
    - E.g. Mobile Agent Reactive Spaces (MARS) by Cabri et al., University of Modena
- The "Data Grid" - human interface
  - Interactive monitoring and control of Grid resources
    - By authorized groups and individuals
    - By autonomous agents

Bucharest MAN for Ro-Grid
[Diagram: metropolitan ring with 1G links and a 1G backup link through Palat Telefoane, Romana, Victoriei, Gara de Nord, Eroilor, Izvor, Universitate and Unirii; a NOC with Catalyst L3, C7206 and C7513 routers with Gigabit interfaces and a Cat4000 L3 switch; ICI and IFIN connected at 100 Mbps and 10/100/1000 Mbps.]

RoEdu Network