1
HENP Grids and Networks: Global Virtual Organizations
Harvey B Newman, Professor of Physics; LHCNet PI, US CMS Collaboration Board Chair
WAN In Lab Site Visit, Caltech: Meeting the Advanced Network Needs of Science, March 5, 2003
2
Computing Challenges: Petabytes, Petaflops, Global VOs
- Geographical dispersion: of people and resources (5000+ physicists, 250+ institutes, 60+ countries)
- Complexity: the detector and the LHC environment
- Scale: tens of Petabytes per year of data
Major challenges are associated with: communication and collaboration at a distance; managing globally distributed computing and data resources; cooperative software development and physics analysis. New forms of distributed systems: Data Grids.
3
Next Generation Networks for Experiments: Goals and Needs
Large data samples explored and analyzed by thousands of globally dispersed scientists, in hundreds of teams.
- Providing rapid access to event samples, subsets and analyzed physics results from massive data stores: from Petabytes by 2002, ~100 Petabytes by 2007, to ~1 Exabyte by ~2012
- Providing analyzed results with rapid turnaround, by coordinating and managing the large but LIMITED computing, data handling and NETWORK resources effectively
- Enabling rapid access to the data and the collaboration, across an ensemble of networks of varying capability
- Advanced integrated applications, such as Data Grids, rely on seamless operation of our LANs and WANs, with reliable, monitored, quantifiable high performance
4
LHC: First Beams April 2007; Physics Runs July 2007
27 km ring; 1232 dipoles; B = 8.3 T (NbTi at 1.9 K)
pp at √s = 14 TeV, design luminosity L = 10^34 cm^-2 s^-1; heavy ions (e.g. Pb-Pb at √s ~ 1000 TeV)
CMS and ATLAS: pp, general purpose; LHCb: pp, B-physics; ALICE: heavy ions, p-ions; TOTEM
5
LHC Collaborations (ATLAS and CMS): the US provides about 20-25% of the author list in both experiments
6
US LHC Institutions: US ATLAS, US CMS, and Accelerator [map]
7
Four LHC Experiments: The Petabyte to Exabyte Challenge
ATLAS, CMS, ALICE, LHCb: Higgs + new particles; Quark-Gluon Plasma; CP Violation
Data stored ~40 Petabytes/year and up; CPU 0.30 Petaflops and up
0.1 to 1 Exabyte (1 EB = 10^18 Bytes) for the LHC Experiments (2008 to ~2013?)
8
LHC: Higgs decay into 4 muons (Tracker only); 1000X the LEP data rate. 10^9 events/sec, selectivity: 1 in 10^13 (1 person in a thousand world populations)
9
LHC Data Grid Hierarchy
- Experiment / Online System: ~PByte/sec off the detector; ~100-1500 MBytes/sec into the CERN Tier 0+1 center (700k SI95, ~1 PB disk, tape robot)
- Tier 1 centers (FNAL: 200k SI95, 600 TB; IN2P3, INFN, RAL) linked to CERN at 2.5-10 Gbps
- Tier 2 centers at ~2.5-10 Gbps
- Tier 3: institutes (~0.25 TIPS), with physics data caches, at 0.1-10 Gbps; Tier 4: workstations
- Physicists work on analysis "channels"; each institute has ~10 physicists working on one or more channels
- CERN/Outside resource ratio ~1:2; Tier0/(ΣTier1)/(ΣTier2) ~1:1:1
10
Transatlantic Net WG (HN, L. Price): Bandwidth Requirements [*]
[*] Installed BW; maximum link occupancy of 50% assumed. See http://gate.hep.anl.gov/lprice/TAN
11
History – One Large Research Site: current traffic ~400 Mbps (ESnet limitation); projections 0.5 to 24 Tbps by ~2012. Much of the traffic: SLAC to IN2P3/RAL/INFN, via ESnet+France and Abilene+CERN
12
Progress: Max. Sustained TCP Throughput on Transatlantic and US Links
- 8-9/01: 105 Mbps in 30 streams SLAC-IN2P3; 102 Mbps in 1 stream CIT-CERN
- 11/5/01: 125 Mbps in one stream (modified kernel) CIT-CERN
- 1/09/02: 190 Mbps for one stream shared on two 155 Mbps links
- 3/11/02: 120 Mbps disk-to-disk with one stream on a 155 Mbps link (Chicago-CERN)
- 5/20/02: 450-600 Mbps SLAC-Manchester on OC12 with ~100 streams
- 6/1/02: 290 Mbps Chicago-CERN, one stream on OC12 (modified kernel)
- 9/02: 850, 1350, 1900 Mbps Chicago-CERN with 1, 2, 3 GbE streams on an OC48 link
- 11-12/02 (FAST): 940 Mbps in 1 stream SNV-CERN; 9.4 Gbps in 10 flows SNV-Chicago
Also see http://www-iepm.slac.stanford.edu/monitoring/bulk/ and the Internet2 E2E Initiative: http://www.internet2.edu/e2e
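These throughput records reflect the bandwidth-delay product problem on long paths: a single standard TCP flow can only fill the pipe if its window holds a full round trip's worth of data, which is why the early records needed many parallel streams or modified kernels. A minimal sketch of the arithmetic, using illustrative link speeds and a ~120 ms transatlantic RTT (assumed values, not the measurements quoted above):

```python
# Bandwidth-delay product: bytes that must be in flight to keep a long-haul link full.
# The link speeds and RTT below are illustrative, not the measured values quoted above.

def tcp_window_bytes(link_gbps: float, rtt_ms: float) -> float:
    """Window (bytes in flight) needed to saturate the link with a single TCP flow."""
    return (link_gbps * 1e9 / 8) * (rtt_ms / 1e3)

for link in (0.622, 2.5, 10.0):          # OC12, OC48, 10 Gbps
    mb = tcp_window_bytes(link, 120) / 1e6
    print(f"{link:>6} Gbps at 120 ms RTT -> ~{mb:.0f} MB window per flow")
```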
13
US-CERN OC48 Deployment: Phases One and Two
OC12 (production) and OC48 (develop and test) circuits between the Caltech (DOE) PoP in Chicago and CERN - Geneva, terminating on Cisco 7606 and 7609 routers at each end, with American partners connecting in Chicago and European partners at CERN [deployment diagrams]
14
OC-48 Deployment (Cont'd): Phase Three (Late 2002)
OC48 (2.5 Gbps) and OC12 (622 Mbps) circuits between Caltech (DoE) and DataTAG (CERN) equipment: Cisco 7606/7609 routers, Juniper M10s, Alcatel 7770s and optical mux/demux at both ends, linking American and European partners [deployment diagram]
- Separate environments for tests and production
- Transatlantic testbed dedicated to advanced optical network research and intensive data access applications
15
DataTAG Project
- EU-solicited project: CERN, PPARC (UK), Amsterdam (NL) and INFN (IT), with US (DOE/NSF) partners UIC, NWU and Caltech
- Main aims: ensure maximum interoperability between US and EU Grid projects; provide a transatlantic testbed for advanced network research
- 2.5 Gbps wavelength triangle from 7/02; to a 10 Gbps triangle by early 2003
[Network map: Geneva, New York and STAR-TAP/STARLIGHT wave triangle (2.5 to 10G); SURFnet (NL), SuperJANET4 (UK), GARR-B (It), INRIA/VTHD (Fr), GEANT, Abilene, ESnet, CALREN-2, Atrium]
16
LHCnet Network: March 2003 (development and tests)
Caltech/DoE PoP at StarLight (Chicago) linked to CERN - Geneva by OC12 (622 Mbps) and OC48 (2.5 Gbps) circuits over Alcatel 1670 optical mux/demux at each end. Equipment at both ends: Cisco 7606/7609, Juniper M10 and Alcatel 7770 routers (Caltech/DoE and DataTAG/CERN), plus Linux PCs for performance tests and monitoring. Peerings: Abilene, MREN, ESnet, STARTAP and NASA in Chicago; GEANT, SWITCH, IN2P3 and WHO at CERN.
17
FAST (Caltech): A Scalable, "Fair" Protocol for Next-Generation Networks, from 0.1 to 100 Gbps (netlab.caltech.edu/FAST; C. Jin, D. Wei, S. Low, FAST Team & Partners)
SC2002 (11/02) experiment over Sunnyvale, Baltimore, Chicago and Geneva paths of 3000 km, 1000 km and 7000 km; milestones include multiple-flow and 1-flow tests (29.3.00, 9.4.02), IPv6 (22.8.02), and SC2002 runs with 1, 2 and 10 flows (Baltimore-Geneva, Baltimore-Sunnyvale). The Internet is treated as a distributed feedback system (TCP/AQM theory).
Highlights of FAST TCP:
- Standard packet size; 940 Mbps in a single flow per GE card: 9.4 petabit-m/sec, 1.9 times the I2 LSR (Sunnyvale-Geneva)
- 9.4 Gbps with 10 flows: 37.0 petabit-m/sec, 6.9 times the LSR; 22 TB in 6 hours in 10 flows
- Implementation: sender-side (only) modifications; delay (RTT) based; stabilized Vegas
- Next: 10GbE; 1 GB/sec disk to disk
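FAST is delay-based: it adjusts the congestion window from the gap between the observed RTT and the propagation delay, rather than from packet loss. The sketch below is a simplified illustration in the spirit of the published FAST/Vegas update rule; the parameter values, the toy queueing model and the function name are my own, not the Caltech implementation.

```python
# Simplified delay-based window update in the spirit of FAST TCP (stabilized Vegas).
# alpha (target packets queued in the path) and gamma (smoothing) are illustrative values.

def fast_window_update(w: float, base_rtt: float, rtt: float,
                       alpha: float = 200.0, gamma: float = 0.5) -> float:
    """One update step: move the window toward the point where ~alpha packets are queued."""
    target = w * (base_rtt / rtt) + alpha
    return min(2 * w, (1 - gamma) * w + gamma * target)

# Toy trace: the window grows while the path is empty, then settles as queueing delay appears.
w, base_rtt = 100.0, 0.120                # packets, seconds
for step in range(5):
    rtt = base_rtt + w * 1e-5             # crude stand-in for queueing delay growing with w
    w = fast_window_update(w, base_rtt, rtt)
    print(f"step {step}: window ~{w:.0f} packets, rtt ~{rtt * 1000:.1f} ms")
```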
18
TeraGrid (www.teragrid.org): NCSA, ANL, SDSC, Caltech, PSC
A preview of the Grid hierarchy and networks of the LHC era. DTF backplane: 4 x 10 Gbps. Sites and hubs: NCSA/UIUC, ANL, UIC, Ill Inst of Tech, Univ of Chicago, Starlight / NW Univ, Indianapolis (Abilene NOC), Caltech, San Diego, plus multiple carrier hubs. Links: OC-48 (2.5 Gb/s, Abilene) via Chicago, Indianapolis and Urbana; multiple 10 GbE (Qwest); multiple 10 GbE (I-WIRE dark fiber). Source: Charlie Catlett, Argonne
19
National Light Rail Footprint
NLR buildout started November 2002: initially 4 10G wavelengths, growing to 40 10G waves in the future. [Map: 15808 terminal, regen or OADM sites and fiber routes spanning ~25 US cities from SEA and LAX to BOS, NYC and WDC]
Transition now to optical, multi-wavelength R&E networks: US, European and intercontinental (US-China-Russia) initiatives; efficient use of wavelengths is an essential part of this picture
20
HENP Major Links: Bandwidth Roadmap (Scenario) in Gbps
Continuing the trend: ~1000 times bandwidth growth per decade; we are rapidly learning to use and share multi-Gbps networks
21
HENP Lambda Grids: Fibers for Physics
- Problem: extract "small" data subsets of 1 to 100 Terabytes from 1 to 1000 Petabyte data stores
- Survivability of the HENP global Grid system, with hundreds of such transactions per day (circa 2007), requires that each transaction be completed in a relatively short time
- Example: take 800 seconds to complete the transaction. Then:
  Transaction size 1 TB -> net throughput 10 Gbps
  Transaction size 10 TB -> net throughput 100 Gbps
  Transaction size 100 TB -> net throughput 1000 Gbps (capacity of fiber today)
- Summary: providing switching of 10 Gbps wavelengths within ~3-5 years, and Terabit switching within 5-8 years, would enable "Petascale Grids with Terabyte transactions", as required to fully realize the discovery potential of major HENP programs, as well as other data-intensive fields
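The throughput column follows directly from the 800-second target; a quick check of the arithmetic (the 800 s figure is the slide's own example):

```python
# Net throughput needed to move a transaction of a given size in 800 seconds,
# matching the slide's example (1 TB -> 10 Gbps, 10 TB -> 100 Gbps, 100 TB -> 1000 Gbps).

TARGET_SECONDS = 800

def required_gbps(terabytes: float, seconds: float = TARGET_SECONDS) -> float:
    return terabytes * 1e12 * 8 / seconds / 1e9

for tb in (1, 10, 100):
    print(f"{tb:>4} TB in {TARGET_SECONDS} s -> {required_gbps(tb):.0f} Gbps")
```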
22
Emerging Data Grid User Communities
- NSF Network for Earthquake Engineering Simulation (NEES): integrated instrumentation, collaboration, simulation
- Grid Physics Network (GriPhyN): ATLAS, CMS, LIGO, SDSS
- Access Grid; VRVS: supporting group-based collaboration
And:
- Genomics, Proteomics, ...
- The Earth System Grid and EOSDIS
- Federating Brain Data
- Computed MicroTomography
- Virtual Observatories
23
COJAC: CMS ORCA Java Analysis Component (Java3D, Objectivity, JNI, Web Services). Demonstrated Caltech-Rio de Janeiro (2/02) and Chile (5/02)
24
CAIGEE: CMS Analysis – an Integrated Grid Enabled Environment (CIT, UCSD, Riverside, Davis; + UCLA, UCSB)
- NSF ITR – 50% so far
- Lightweight and functional, making use of existing software AFAP
- Plug-in architecture based on Web Services
- Exposes the Grid "global system" to physicists, at various levels of detail, with feedback
- Supports request, preparation, production, movement and analysis of physics object collections
- Initial target: Californian US-CMS physicists; future: the whole of US CMS and CMS
25
Clarens: The Clarens Remote (Light-Client) Dataserver, a WAN system for remote data analysis (HTTP/SOAP/RPC server). Clarens servers are deployed at Caltech, Florida, UCSD, FNAL and Bucharest, and are being extended to UERJ in Rio (CHEPREO). SRB is now installed as a Clarens service on the Caltech Tier2 (Oracle backend).
26
NSF ITR: Globally Enabled Analysis Communities
- Develop and build Dynamic Workspaces
- Build Private Grids to support scientific analysis communities, using agent-based peer-to-peer Web Services
- Construct autonomous communities operating within global collaborations
- Empower small groups of scientists (teachers and students) to profit from and contribute to international big science
- Drive the democratization of science via the deployment of new technologies
27
NSF ITR: Key New Concepts
- Dynamic Workspaces: the capability for an individual or sub-community to request and receive expanded, contracted or otherwise modified resources, while maintaining the integrity and policies of the global enterprise
- Private Grids: the capability for an individual or community to request, control and use a heterogeneous mix of enterprise-wide and community-specific software, data, metadata and resources
- Built on a globally managed end-to-end Grid services architecture and monitoring system; autonomous, agent-based, peer-to-peer
28
Private Grids and P2P Sub-Communities in Global CMS
29
A Global Grid Enabled Collaboratory for Scientific Research (GECSR)
A joint ITR proposal from Caltech (HN PI, JB Co-PI), Michigan (two Co-PIs) and Maryland (Co-PI), with senior personnel from Lawrence Berkeley Lab, Oklahoma, Fermilab, Arlington (U. Texas), Iowa and Florida State.
The first Grid-enabled Collaboratory: tight integration between
- the science of collaboratories,
- a globally scalable working environment,
- a sophisticated set of collaborative tools (VRVS, VNC; next generation), and
- an agent-based monitoring and decision support system (MonALISA)
30
GECSR (continued)
- Initial targets are the global HENP collaborations, but GECSR is expected to be widely applicable to other large-scale collaborative scientific endeavors
- "Giving scientists from all world regions the means to function as full partners in the process of search and discovery"
- The importance of collaboration services is highlighted in the Cyberinfrastructure report of Atkins et al. (2003)
31
Current Grid Challenges: Secure Workflow Management and Optimization
- Maintaining a global view of resources and system state: coherent end-to-end system monitoring; adaptive learning with new algorithms and strategies for execution optimization (increasingly automated)
- Workflow: a strategic balance of policy versus moment-to-moment capability to complete tasks; balancing high usage of limited resources against better turnaround times for priority jobs; goal-oriented algorithms that steer requests according to (yet to be developed) metrics
- Handling user-Grid interactions: guidelines; agents
- Building higher-level services, and an integrated, scalable user environment, on top of the above
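The steering metrics are explicitly "yet to be developed"; purely as a toy illustration of what a goal-oriented steering score might weigh (job priority, expected turnaround, site load), and not anything presented in the talk:

```python
# Toy job-steering score balancing priority, expected turnaround and site load.
# Entirely illustrative: the weights, sites and numbers are invented for the sketch.

def steering_score(priority: float, est_turnaround_h: float, site_load: float,
                   w_priority: float = 1.0, w_turnaround: float = 0.5, w_load: float = 0.3) -> float:
    """Higher score = better candidate site for this job."""
    return w_priority * priority - w_turnaround * est_turnaround_h - w_load * site_load

sites = {"SiteA": (2.0, 0.9), "SiteB": (1.5, 0.4), "SiteC": (3.0, 0.2)}   # (est hours, load 0-1)
best = max(sites, key=lambda s: steering_score(5.0, sites[s][0], sites[s][1]))
print("steer job to:", best)
```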
32
VRVS growth: 14,000+ hosts; 8,000+ registered users in 64 countries; 56 reflectors (7 on Internet2); annual growth 2 to 3X
33
MonALISA: A Globally Scalable Grid Monitoring System (by I. Legrand, Caltech)
- Deployed on the US CMS Grid
- Agent-based, dynamic information / resource discovery mechanism
- Talks with other monitoring systems
- Implemented in Java/Jini; SNMP; WSDL / SOAP with UDDI
- Part of a global "Grid Control Room" service
34
Distributed System Services Architecture (DSSA): CIT/Romania/Pakistan
- Agents: autonomous, auto-discovering, self-organizing, collaborative
- "Station Servers" (static) host mobile "Dynamic Services"
- Servers interconnect dynamically; they form a robust fabric in which mobile agents travel, with a payload of (analysis) tasks
- Adaptable to Web services (OGSA) and many platforms
- Adaptable to ubiquitous, mobile working environments
[Diagram: Station Servers, Lookup Services, proxy exchange, registration, service listener, lookup discovery service, remote notification]
Managing global systems of increasing scope and complexity, in the service of science and society, requires a new generation of scalable, autonomous, artificially intelligent software systems.
35
MONARC SONN: 3 Regional Centres Learning to Export Jobs (Day 9), by I. Legrand
Simulated sites: CERN (30 CPUs), CALTECH (25 CPUs), NUST (20 CPUs), connected by links of 0.8-1.2 MB/s with 150-200 ms RTT. [Simulation snapshot at day 9, showing per-site efficiencies of about 0.66 to 0.83]
36
Networks, Grids, HENP and WAN-in-Lab
- The current generation of 2.5-10 Gbps network backbones arrived in the last 15 months in the US, Europe and Japan; major transoceanic links are also at 2.5-10 Gbps in 2003. Capability increased ~4 times, i.e. 2-3 times Moore's law.
- Reliable, high end-to-end performance of network applications (large file transfers; Grids) is required. Achieving this requires: a deep understanding of protocol issues, for efficient use of wavelengths in the 1 to 10 Gbps range now and higher speeds (e.g. 40 to 80 Gbps) in the near future; getting high-performance (TCP) toolkits into users' hands; and end-to-end monitoring with a coherent approach.
- Removing regional and last-mile bottlenecks, and compromises in network quality, are now on the critical path in all regions; we will work in concert with AMPATH, Internet2, Terena, APAN, DataTAG, the Grid projects and the Global Grid Forum.
- A WAN in Lab facility, available to the community, is a key element in achieving these revolutionary goals.
37
Building Petascale Global Grids: Implications for Society
- Meeting the challenges of Petabyte-to-Exabyte Grids, and Gigabit-to-Terabit networks, will transform research in science and engineering
- These developments will create the first truly global virtual organizations (GVOs)
- If these developments are successful, they could lead to profound advances in industry, commerce and society at large, by changing the relationship between people and "persistent" information in their daily lives, within the next five to ten years
38
Some Extra Slides Follow
39
Global Networks for HENP
National and international networks, with sufficient (rapidly increasing) capacity and capability, are essential for:
- the daily conduct of collaborative work in both experiment and theory
- detector development and construction on a global scale; data analysis involving physicists from all world regions
- the formation of worldwide collaborations
- the conception, design and implementation of next-generation facilities as "global networks"
"Collaborations on this scale would never have been attempted, if they could not rely on excellent networks"
40
The Large Hadron Collider (2007-)
- The next-generation particle collider; the largest superconductor installation in the world
- Bunch-bunch collisions at 40 MHz, each generating ~20 interactions; only one in a trillion may lead to a major physics discovery
- Real-time data filtering: Petabytes per second down to Gigabytes per second
- Accumulated data of many Petabytes/year
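A quick order-of-magnitude cross-check of the rate figures on this slide and the earlier trigger slide (40 MHz crossings, ~20 interactions each, selectivity of roughly 1 in 10^13):

```python
# Cross-check: 40 MHz bunch crossings x ~20 interactions each ~ 10^9 interactions/sec,
# consistent with the "10^9 events/sec" figure on the earlier Higgs-trigger slide.

bunch_crossing_hz = 40e6
interactions_per_crossing = 20
selectivity = 1e-13                      # ~1 in 10^13

interactions_per_sec = bunch_crossing_hz * interactions_per_crossing
print(f"~{interactions_per_sec:.0e} interactions/sec")
print(f"~{interactions_per_sec * selectivity * 86400:.0f} selected events/day at that selectivity")
```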
41
Education and Outreach: QuarkNet has 50 centers nationwide (60 planned). Each center has 2-6 physicist mentors and 2-12 teachers (depending on the year of the program and local variations).
42
Baseline BW for the US-CERN Link: HENP Transatlantic WG (DOE+NSF)
- US-CERN link: 622 Mbps this month
- DataTAG 2.5 Gbps research link in Summer 2002; 10 Gbps research link by approx. mid-2003
- Transoceanic networking integrated with the Abilene, TeraGrid, regional nets and continental network infrastructures in the US, Europe, Asia and South America
- Baseline evolution typical of major HENP links 2001-2006
43
A transatlantic testbed
- Multi-platform: vendor independent; interoperability tests; performance tests
- Multiplexing of optical signals into a single OC-48 transatlantic optical channel
- Layer 2 services: Circuit Cross-Connect (CCC); Layer 2 VPN
- IP services: multicast; IPv6; QoS
- Future: GMPLS (an extension of MPLS)
44
Service Implementation Example: StarLight-Chicago to CERN-Geneva [diagram]
A 2.5 Gb/s transatlantic channel between optical multiplexers at the two ends, with GbE and 10 GbE attachments; VLANs 600 and 601 carried end to end (VLAN 600 via CCC); IBGP between the ends, BGP towards Abilene & ESnet and towards GEANT and the CERN network, OSPF internally; test hosts 1-3 at each end
45
Logical View of the Previous Implementation [diagram]
Over the 2.5 Gb/s POS link: IP over POS; IP over MPLS over POS; Ethernet over POS (VLAN 601); Ethernet over MPLS over POS (VLAN 600). End systems: hosts 1-3 behind Abilene & ESnet and hosts 1-3 behind GEANT and the CERN network
46
Using Web Services for Tag Data (Example)
- Use ~180,000 Tag objects derived from di-jet ORCA events
- Each Tag: run and event number, OID of the ORCA event object, then E, Phi, Theta, ID for the 5 most energetic particles, and E, Phi, Theta for the 5 most energetic jets
- These Tag events have been used in various performance tests and demonstrations, e.g. SC2000, SC2001, the comparison of Objectivity vs RDBMS query speeds (GIOD), and as source data for COJAC
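As a rough illustration of the record layout described above (the field names and OID format are invented for the sketch, not the actual ORCA Tag schema):

```python
# Illustrative layout of one Tag record as described above; field names and the
# OID string are invented for the sketch, not the actual ORCA/Objectivity schema.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class TagEvent:
    run: int
    event: int
    orca_oid: str                                          # OID of the full ORCA event object
    particles: List[Tuple[float, float, float, int]] = field(default_factory=list)  # (E, phi, theta, id) x 5
    jets: List[Tuple[float, float, float]] = field(default_factory=list)            # (E, phi, theta) x 5

tag = TagEvent(run=1, event=42, orca_oid="3-7-1-9",
               particles=[(55.2, 1.1, 0.7, 13)] * 5,
               jets=[(120.4, 2.0, 1.3)] * 5)
print(tag.run, tag.event, len(tag.particles), len(tag.jets))
```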
47
COJAC – The CMS ORCA Java Analysis Component
- Written in Java2; uses Java3D for high-speed rendering, rotating, zooming, panning and lighting of the CMS detector geometry
- Java Native Interface (JNI) methods are used to extract the latest CMS geometry; the geometry can be displayed at various detail levels, and all individual detectors can be included or excluded
- The Objectivity ODBMS Java binding is used to access a remote (or local) database of simulated particle collision events; each event is rendered in Java3D, and can be overlaid on the CMS detector geometry
- A Network Test feature allows one to continually fetch simulated event objects from the server until a preselected transfer size has been reached; this gives an idea of the speed at which analysis tools such as COJAC can operate in a WAN environment
48
DB-Independent Access to Object Collections: Middleware Prototype
- The first layer, ODBC, provides database vendor abstraction, allowing any relational (SQL) database to be plugged into the system
- The next layer, OTL, encapsulates the results of a SQL query in a form natural to C++, namely STL (standard template library) map and vector objects
- Higher levels map C++ object collections to the client's required format, and transport the results to the client
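The prototype described above is C++ (ODBC plus OTL mapping into STL containers); as a hedged Python analogue of the same vendor-abstraction idea, the same query code can run against whichever relational backend an ODBC DSN points to. The DSN name, table and columns below are placeholders, not the project's actual schema.

```python
# Python analogue of the ODBC vendor-abstraction idea (the real prototype is C++/OTL).
# The DSN, table and column names are placeholders.
import pyodbc

def fetch_collection(dsn: str, sql: str):
    """Run the same SQL against whichever relational backend the DSN points to."""
    conn = pyodbc.connect(f"DSN={dsn}")
    try:
        cur = conn.cursor()
        cur.execute(sql)
        # Map rows into plain Python structures, roughly as OTL maps results into STL containers.
        columns = [d[0] for d in cur.description]
        return [dict(zip(columns, row)) for row in cur.fetchall()]
    finally:
        conn.close()

# Same call regardless of whether the DSN points at SQLServer, Oracle or PostgreSQL:
# rows = fetch_collection("jetmet_dsn", "SELECT run, event FROM tag")
```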
49
Reverse Engineer and Ingest the CMS JETMET nTuples
From the nTuple description, we derived: an ER diagram for the content; an AOD for the JETMET analysis.
We then wrote tools to: automatically generate a set of SQL CREATE TABLE commands to create the RDBMS tables; generate SQL INSERT and bulk load scripts that enable population of the RDBMS tables.
We imported the nTuples into: SQLServer at Caltech; Oracle 9i at Caltech; Oracle 9i at CERN; PostgreSQL at Florida.
In the future: generate a "Tag" table of data that captures the most often used data columns in the nTuple.
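A toy sketch of the kind of CREATE TABLE generation described above, driven by a column list; the column names and SQL types are invented examples, not the real JETMET nTuple layout:

```python
# Toy generator of CREATE TABLE statements from an nTuple-style column description,
# in the spirit of the tooling described above. Columns here are invented examples.
from typing import List, Tuple

def create_table_sql(table: str, columns: List[Tuple[str, str]]) -> str:
    cols = ",\n  ".join(f"{name} {sqltype}" for name, sqltype in columns)
    return f"CREATE TABLE {table} (\n  {cols}\n);"

jetmet_columns = [
    ("run", "INTEGER"),
    ("event", "INTEGER"),
    ("njets", "INTEGER"),
    ("jet_et", "FLOAT"),
    ("jet_eta", "FLOAT"),
    ("jet_phi", "FLOAT"),
]
print(create_table_sql("jetmet", jetmet_columns))
```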
50
GAE Demonstrations: iGrid2002 (Amsterdam)
51
Clarens (Continued)
- Servers: multi-process (based on Apache), using XML-RPC; a similar server using SOAP; a lightweight (select-loop) single-process server
- Server functionality: file access (read/download, part or whole), directory listing, file selection, file checksumming, access to SRB, security with a PKI/VO infrastructure, RDBMS data selection/analysis, MonALISA integration
- Clients: a ROOT client for browsing remote file repositories and files; a platform-independent Python-based client for rapid prototyping; a browser-based Java/Javascript client (in planning) for use when no other client package is desired; some clients of historical interest, e.g. Objectivity
- Source/Discussion: http://clarens.sourceforge.net
- Future: expose POOL as a remote data source in Clarens; Clarens peer-to-peer discovery and communication
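Since the primary Clarens server speaks XML-RPC over HTTP and the rapid-prototyping client is Python-based, a minimal client call might look like the sketch below; the server URL and the "file.ls" method name are placeholders, not the documented Clarens API.

```python
# Minimal sketch of calling an XML-RPC dataserver such as Clarens from Python.
# The URL and the 'file.ls' method name are placeholders, not the actual Clarens API.
import xmlrpc.client

server = xmlrpc.client.ServerProxy("http://clarens.example.org:8080/clarens")
try:
    listing = server.file.ls("/store/tags")    # hypothetical remote directory-listing call
    for entry in listing:
        print(entry)
except xmlrpc.client.Fault as err:
    print("server returned a fault:", err.faultString)
```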
52
Application Architecture: Interfacing to the Grid
- (Physicists') application codes
- Experiments' software framework layer: modular and Grid-aware; an architecture able to interact effectively with the lower layers
- Grid applications layer (parameters and algorithms that govern system operations): exert strategy and policy (resource use; turnaround); workflow evaluation metrics; task-site coupling proximity metrics
- Global end-to-end system services layer: monitoring and tracking of component performance; workflow monitoring and evaluation mechanisms; error recovery and redirection mechanisms; system self-monitoring, evaluation and optimization mechanisms (e.g. SONN)
53
Modeling and Simulation: the MONARC System (I. Legrand) – SIMULATION of Complex Distributed Systems for LHC
The simulation program developed within MONARC (Models Of Networked Analysis At Regional Centers) uses a process-oriented approach for discrete event simulation, and provides a realistic modelling tool for large-scale distributed systems.
54
Globally Scalable Monitoring Service (I. Legrand) [diagram]
Farm Monitor components register with Lookup Services and are discovered (via a proxy) by the RC Monitor Service and other clients: registration, lookup, discovery, remote notification. Features: component factory, GUI marshaling, code transport, RMI data access; data gathered by push & pull, rsh & ssh scripts, and SNMP.
55
NSF/ITR: A Global Grid-Enabled Collaboratory for Scientific Research (GECSR) and Grid Analysis Environment (GAE)
Harvey B Newman, California Institute of Technology; CHEP 2001, Beijing, September 6, 2001
56
GECSR Features
- Persistent collaboration: desktop, small and large conference rooms, halls, virtual control rooms
- Hierarchical, persistent, ad-hoc peer groups (using Virtual Organization management tools)
- "Language of Access": an ontology and terminology for users to control the GECSR; examples: the cost of interrupting an expert; virtual open and partly-open doors
- Support for Human-System (Agent)-Human as well as Human-Human interactions
- Evaluation, evolution and optimisation of the GECSR
- Agent-based decision support for users
- The GECSR will be delivered in "packages" over the course of the (four-year) project
57
GECSR: First Year Package
- NEESGrid: unifying interface and tool launch system; CHEF Framework (www.chefproject.org): portlets for file transfer using GridFTP, teamlets, announcements, chat, shared calendar, role-based access, threaded discussions, document repository
- GIS-GIB: a geographic-information-systems-based Grid information broker
- VO management tools (developed for PPDG)
- Videoconferencing and shared desktop: VRVS and VNC (www.vrvs.org and www.vnc.org)
- MonALISA: real-time system monitoring and user control
- The above tools already exist: the effort is in integrating them and identifying the missing functionality
58
GECSR: Second Year Package – Enhancements, including:
- Detachable windows
- Web Services Definition (WSDL) for CHEF
- Federated collaborative servers
- Search capabilities
- Learning management components
- Grid computational portal toolkit
- Knowledge Book, HEPBook
- Software agents for intelligent searching, etc.
- Flexible authentication
59
Beyond Traditional Architectures: Mobile Agents ("Agents are objects with rules and legs" -- D. Taylor)
- Mobile agents: (semi-)autonomous, goal driven, adaptive
- Execute asynchronously
- Reduce network load: local conversations
- Overcome network latency, and some outages
- Adaptive, robust, fault tolerant
- Naturally heterogeneous
- Extensible concept: coordinated agent architectures (application / service / agent)