1 National Collaboratories Middleware Projects Ray Bair NC Program Coordinator January 15, 2002

2 Topics  What is middleware?  Goals of NC Middleware research  What’s coming from the NC program  Interacting with NC projects  Tools you can use now  Appendix: More about middleware projects

3 Middleware: software that connects or mediates between two otherwise separate programs. The NC Middleware Program is focused on…  Technology to enable ubiquitous access to remote resources – computation, information and expertise  Capabilities that make it easier for distributed teams to work together, over the short and long term  Standard services and protocols for access to networked resources that aid software development/interoperability  Middleware advances that enable scientific computing, e.g., high performance for scientific applications

4 Middleware Roles  [Diagram: middleware layers between Applications (with Portals and PSEs) and The Grid – Component Architectures and Collaboratory middleware – with the NC projects positioned among them: Distributed Security Architectures, Scientific Annotation Middleware, Reliable/Secure Communication, Pervasive Collaborative Environment, Group to Group Collaboration, DataGrid Middleware, Storage Resource Management, CoG Kits, Middleware for Science Portals]

5 DOE NC Middleware Projects Realizing the Science Grid Tools for Distributed Applications

6 Realizing and Enhancing the Science Grid  Consistent access to resources and information  High performance data transport  Reliable and secure communication among peers  Tools to manage data  Tools to utilize metadata Metadata : information about data or other information

7 Distributed Security Architectures Mary Thompson, LBNL  Concept  Secure/flexible way to authorize access to distributed resources Based on signed Policy, Use-condition and Attribute certificates  Stakeholders flexibly set access policy on a per resource granularity  Existing Technologies  Akenti 1.0beta, Secure Akenti-enhanced web server beta  Sample secure ORBIX CORBA ORB with Akenti integration  Future Capabilities  Enhanced, secure Akenti-enabled Apache web server  Web interface to generate UseCondition/Attribute certificates  Proxy credential delegation capabilities  Integration with GSI – Grid Security Infrastructure
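To make the certificate-based authorization idea concrete, here is a minimal Python sketch of a gatekeeper-style check in the Akenti spirit. The class and function names are illustrative only and are not the Akenti API; a use-condition lists required attributes, and signed attribute certificates from trusted stakeholders satisfy it.

```python
# Hypothetical sketch of certificate-based authorization in the Akenti style.
# A use-condition lists attributes a requester must hold; attribute
# certificates assert attributes about a subject (distinguished name).
from dataclasses import dataclass

@dataclass
class AttributeCert:
    subject_dn: str      # e.g. "/O=DOE/OU=LBNL/CN=Alice"
    attribute: str       # e.g. "group=fusion-collab"
    issuer: str          # stakeholder that signed the assertion

@dataclass
class UseCondition:
    resource: str                # resource this policy protects
    required_attributes: set     # all of these must be asserted
    trusted_issuers: set         # stakeholders allowed to assert them

def authorize(subject_dn, certs, use_condition):
    """Return True if the subject's certificates satisfy the use-condition."""
    asserted = {
        c.attribute
        for c in certs
        if c.subject_dn == subject_dn and c.issuer in use_condition.trusted_issuers
    }
    return use_condition.required_attributes <= asserted

# Example: a gatekeeper for /data/fusion checks an incoming request.
policy = UseCondition(
    resource="/data/fusion",
    required_attributes={"group=fusion-collab"},
    trusted_issuers={"CA=LBNL-Stakeholder"},
)
alice = [AttributeCert("/O=DOE/OU=LBNL/CN=Alice", "group=fusion-collab", "CA=LBNL-Stakeholder")]
print(authorize("/O=DOE/OU=LBNL/CN=Alice", alice, policy))  # True
```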

8 SciDAC DataGrid Middleware I. Foster, ANL; C. Kesselman, ISI; Miron Livny, UW  Concept  Innovative techniques for co-reservation of Grid compute, network, and storage resources, and market brokering services  High performance I/O, with intelligent, adaptive recovery  Efficient, distributed replica management to improve access efficiency  Existing Technologies  Globus Toolkit, including GridFTP, GRAM, … Condor  Future Capabilities  New Data Mover Technology supporting very high speed, reliable data movement, via new protocols  Data Transfer Management for orchestrating data transfers  Collective Data Management to support replication, mirroring, wide area hierarchical storage management
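As an illustration of the replica-management idea, here is a hedged sketch of how a data-grid client might pick the best physical copy of a logical file before a GridFTP-style transfer. The catalog, bandwidth table, and function names are invented for this example and are not the Globus Toolkit API.

```python
# Hypothetical sketch of replica-catalog lookup plus a "pick the best copy"
# step, as a data-grid client might do before a GridFTP-style transfer.
replica_catalog = {
    # logical file name -> physical copies (host, path)
    "lfn://pdg/run42.dat": [
        ("gridftp://anl.example.gov", "/store/run42.dat"),
        ("gridftp://lbl.example.gov", "/cache/run42.dat"),
    ],
}

measured_bandwidth_mbps = {
    "gridftp://anl.example.gov": 120.0,
    "gridftp://lbl.example.gov": 450.0,
}

def best_replica(lfn):
    """Choose the physical replica with the highest observed bandwidth."""
    copies = replica_catalog[lfn]
    return max(copies, key=lambda hp: measured_bandwidth_mbps.get(hp[0], 0.0))

def fetch(lfn, local_path):
    host, remote_path = best_replica(lfn)
    # A real client would open a (possibly parallel, striped) data channel here.
    print(f"transferring {host}{remote_path} -> {local_path}")

fetch("lfn://pdg/run42.dat", "/tmp/run42.dat")
```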

9 Reliable and Secure Group Communication Deb Agarwal, LBNL  Concept  Many-to-many group communication that scales to the Internet  Flexible message delivery in terms of reliability and ordering  Peer-to-peer, secure, reliable, ordered multicast  Existing Technologies  InterGroup Protocol Design, and prototype implementation  Future Capabilities  Group Security Layer, for secure group communications Creates an SSL equivalent for group communication  Software tools that support these protocols
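To illustrate the ordering guarantees, here is a toy Python sketch of sequenced group delivery. InterGroup itself is peer-to-peer and far more sophisticated; the central sequencer below is only a simplification to show how out-of-order messages are buffered and delivered in a single agreed order.

```python
# Toy illustration of ordered group delivery. InterGroup is peer-to-peer;
# a central sequencer is used here only to keep the ordering idea short.
import itertools

class Sequencer:
    """Assigns a global sequence number to every message sent to the group."""
    def __init__(self):
        self._seq = itertools.count(1)
    def stamp(self, sender, payload):
        return (next(self._seq), sender, payload)

class Member:
    """Delivers messages to the application strictly in sequence order."""
    def __init__(self, name):
        self.name = name
        self.next_expected = 1
        self.pending = {}
    def receive(self, msg):
        seq, sender, payload = msg
        self.pending[seq] = (sender, payload)
        while self.next_expected in self.pending:        # deliver in order,
            s, p = self.pending.pop(self.next_expected)  # buffering any gaps
            print(f"{self.name} delivers #{self.next_expected} from {s}: {p}")
            self.next_expected += 1

seq = Sequencer()
member = Member("node-a")
first = seq.stamp("node-b", "hello")
second = seq.stamp("node-c", "world")
member.receive(second)   # arrives out of order, held back
member.receive(first)    # now both are delivered, in order
```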

10 Storage Resource Management for Data Grid App’s Arie Shoshani, LBNL; Don Petravick, FNAL  Concept  Grid Middleware to support dynamic storage management for long lasting simulation and analysis tasks in a distributed environment  Coordinate distributed disk caches by “pinning” of files  Existing Technology  Existing HPSS-HRM (Hierarchical Resource Manager) used by several HEP applications, based on logical file requests  Future Capabilities  Disk Resource Manager – manages a single shared disk cache  Storage Resource Manager – manages what data is on each storage device  Sophisticated pinning capabilities – manages requests to cache files
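A minimal sketch of the file-pinning idea behind a Disk Resource Manager: pinned files cannot be evicted, and unpinned files are evicted oldest-first when space is needed. The class and method names are illustrative, not the SRM interface.

```python
# Minimal sketch of "pinning" in a shared disk cache (illustrative only).
from collections import OrderedDict

class DiskCache:
    def __init__(self, capacity_gb):
        self.capacity = capacity_gb
        self.used = 0.0
        self.files = OrderedDict()   # name -> (size_gb, pin_count)

    def pin(self, name):
        size, pins = self.files[name]
        self.files[name] = (size, pins + 1)

    def unpin(self, name):
        size, pins = self.files[name]
        self.files[name] = (size, max(0, pins - 1))

    def stage(self, name, size_gb):
        """Bring a file into the cache, evicting unpinned files if necessary."""
        while self.used + size_gb > self.capacity:
            victim = next((n for n, (_, p) in self.files.items() if p == 0), None)
            if victim is None:
                raise RuntimeError("cache full: every resident file is pinned")
            vsize, _ = self.files.pop(victim)
            self.used -= vsize
        self.files[name] = (size_gb, 0)
        self.used += size_gb

cache = DiskCache(capacity_gb=10)
cache.stage("run1.dat", 6); cache.pin("run1.dat")
cache.stage("run2.dat", 3)
cache.stage("run3.dat", 4)        # evicts run2.dat (unpinned), keeps run1.dat
print(list(cache.files))          # ['run1.dat', 'run3.dat']
```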

11 Scientific Annotation Middleware Jim Myers, Elena Mendoza, PNNL; Al Geist, Jens Schwidder, ORNL  Concept  Unified, lightweight metadata infrastructure to support the creation and use of metadata and annotations  Annotations shared among portals and problem solving environments, software agents, scientific applications, electronic notebooks, …  Existing Technology  DOE 2000 Electronic Notebooks  Future Capabilities  Search Tool, Metadata Viewer, Graphical Relationship Browser, Data Viewer, Data Signature Widget  Notebook Explorer & Viewer
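To show what annotation middleware does at its simplest, here is a small Python sketch of attaching free-text notes and arbitrary key/value metadata to any data object and querying them later. The store, schema, and identifiers are invented for illustration and are not the SAM interfaces.

```python
# Minimal sketch of the annotation idea: arbitrary key/value metadata and
# free-text notes attached to any data object, shareable across tools.
import datetime

class AnnotationStore:
    def __init__(self):
        self._notes = {}       # data object id -> list of annotations

    def annotate(self, object_id, author, text, **metadata):
        note = {
            "author": author,
            "text": text,
            "metadata": metadata,                      # arbitrary schema
            "timestamp": datetime.datetime.utcnow().isoformat(),
        }
        self._notes.setdefault(object_id, []).append(note)
        return note

    def find(self, **criteria):
        """Yield (object_id, note) pairs whose metadata matches all criteria."""
        for oid, notes in self._notes.items():
            for n in notes:
                if all(n["metadata"].get(k) == v for k, v in criteria.items()):
                    yield oid, n

store = AnnotationStore()
store.annotate("nwchem:run/117", "jim", "Basis set looks converged.",
               instrument="NWChem", project="CMCS")
for oid, note in store.find(project="CMCS"):
    print(oid, note["text"])
```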

12 Building Distributed Applications Collaboratories, Portals, and Problem-Solving Environments  Use standard application components in portals  Use Grid services from high level frameworks  Tools for constructing scientific workflow  Create virtual venues for collaborative work Portal : a science-oriented PSE, typically with a web browser interface, that allows scientists to compose and run distributed applications, or to access and analyze distributed data.

13 Middleware Technology to Support Science Portals Dennis Gannon, Randall Bramley, Indiana U.  Concept  A Science Portal that makes it easy to build and access Grid-based scientific applications from desktop web tools  Organized as a set of “Active Notebooks” with web forms to launch and control the application, as well as histories  Existing Technology  GCE-WG Portal Software Repository, Indiana Active Notebook  Future Capabilities  Active Documents – for science-centric input and analysis  Active Notebooks – contain grid-scripts and run histories that can be modified and shared  Component composition of application services
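A hedged sketch of the "Active Notebook" idea: a grid script plus default parameters, a run history recorded per experiment, and a clone operation for sharing a modified copy. The script format and the submit() callback are hypothetical stand-ins for real Grid services.

```python
# Illustrative sketch of an "Active Notebook": a grid script plus the history
# of runs made with it. The script format and runner are hypothetical.
import copy, datetime

class ActiveNotebook:
    def __init__(self, title, grid_script, default_params):
        self.title = title
        self.grid_script = grid_script     # e.g. steps to stage data, run, collect
        self.default_params = default_params
        self.history = []                  # one entry per prior experiment

    def run(self, submit, **overrides):
        """Launch the script via a caller-supplied submit() function and record it."""
        params = {**self.default_params, **overrides}
        job_id = submit(self.grid_script, params)
        self.history.append({
            "when": datetime.datetime.utcnow().isoformat(),
            "params": params,
            "job_id": job_id,
        })
        return job_id

    def clone(self, new_title):
        """Share a modified copy with a colleague, keeping your history private."""
        nb = copy.deepcopy(self)
        nb.title, nb.history = new_title, []
        return nb

# A stand-in for a real Grid submission service:
fake_submit = lambda script, params: f"job-{len(script)}-{params['nodes']}"
nb = ActiveNotebook("CFD study", ["stage input", "run solver", "archive output"],
                    {"nodes": 8, "solver": "v2"})
print(nb.run(fake_submit, nodes=32))
```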

14 SciDAC Commodity Grid (CoG) Kits Gregor von Laszewski, ANL; Keith Jackson, LBNL  Concept  Reusable “Web Services” that access underlying Grid services.  Support rapid development of Science Portals, Problem Solving Environments, and applications that access Grid resources  Existing Technology  CoG Kit prototype  Future Capabilities  CoG access to basic Grid services GRAM, MDS, Security, co-scheduling, …  Support to portal development teams  Components that are composable with a visual tool
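To illustrate the CoG approach of wrapping Grid services in a small language-native API, here is a Python sketch of a session object a portal could script against. The GridSession class and its stubbed submit/transfer/lookup methods are hypothetical, not the actual CoG Kit interface.

```python
# Illustrative sketch of the CoG idea: hide low-level Grid operations behind a
# small, language-native API so portals can script them.
class GridSession:
    def __init__(self, proxy_credential):
        self.proxy = proxy_credential          # e.g. a GSI proxy certificate

    def submit(self, host, executable, arguments=()):
        """Conceptually a GRAM-style job submission; stubbed out here."""
        print(f"[{host}] submit {executable} {' '.join(arguments)} as {self.proxy}")
        return {"host": host, "state": "PENDING"}

    def transfer(self, src_url, dst_url):
        """Conceptually a GridFTP-style transfer; stubbed out here."""
        print(f"copy {src_url} -> {dst_url}")

    def lookup(self, query):
        """Conceptually an MDS-style information-service query; stubbed out here."""
        return [{"host": "compute.example.gov", "free_cpus": 64}] if query else []

# A portal page could then be a few lines of scripting:
session = GridSession(proxy_credential="x509up_u1000")
target = session.lookup("freeCpus>32")[0]["host"]
session.transfer("gsiftp://data.example.gov/input.dat",
                 f"gsiftp://{target}/scratch/input.dat")
job = session.submit(target, "/bin/solver", ["--input", "/scratch/input.dat"])
```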

15 Pervasive Collaborative Computing Environment Deb Agarwal, LBNL; Miron Livny, U Wisconsin  Concept  Collaboration tools that support the continuum of collaborative interaction in a persistent environment  Workflow tools that enable coordination of Grid computing processes and human tasks  Existing Technology  Range of DOE 2000 tools, university and commercial software  Collaborative Virtual Workspace – building/room metaphor  Future Capabilities  Building blocks for typical problems in scientific collaborations  Workflow Framework  Integrate messaging, Grid security, async. collaboration, Condor-G
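As a rough illustration of coordinating Grid tasks and human tasks in one workflow, here is a short Python sketch that advances through a list of steps and pauses when a human step is not yet complete. The step helpers and runner are invented for this example, not the PCCE framework.

```python
# Small sketch of the workflow idea: a sequence of steps, some automated Grid
# tasks and some human tasks, advanced only when their predecessors finish.
def grid_step(name):
    def run():
        print(f"grid task: {name} submitted and completed")
        return True
    return (name, run)

def human_step(name, approved):
    def run():
        print(f"human task: {name} {'approved' if approved else 'waiting'}")
        return approved
    return (name, run)

workflow = [
    grid_step("stage experimental data"),
    grid_step("run analysis on cluster"),
    human_step("PI reviews results", approved=True),
    grid_step("publish results to collaboration portal"),
]

for name, run in workflow:
    if not run():                              # a real framework would persist state
        print(f"workflow paused at: {name}")   # and resume when the task completes
        break
```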

16 Middleware to Support Group to Group Collaboration Rick Stevens, ANL  Concept  Collaborative work sharing beyond simple application sharing  High end visualization env’ts integrated into collaborative spaces  Extending asynchronous collaboration capabilities to embrace all types of data streams exchanged in a collaboration  Existing Technology  Access Grid  Future Capabilities  Scalable virtual venue server  Improved AG security model  Workspace Docking (app sharing)  Easier node management  Tiled display interfaces  Multimedia record/playback

17 How do we get some of these things? Can we influence the direction of projects?  Interaction and feedback are fundamental elements of the NC program.  All projects have dissemination plans.  Learn project details at tomorrow’s Poster Session.  Keep the dialogue going. http://DOEcollaboratory.pnl.gov

18 NC Project Interactions

19 Supplemental Information More about individual middleware projects

20 Middleware Project Contacts and Web Sites  Distributed Security Architectures  Mary Thompson, MRThompson@lbl.gov  High-Performance Data Grid Toolkit  Ian Foster, foster@mcs.anl.gov  Reliable and Secure Group Communication  Deb Agarwal, DAAgarwal@lbl.gov, www-itg.lbl.gov/CIF/GroupComm/  Storage Resource Management for Data Grid App’s  Arie Shoshani, arie@lbl.gov, sdm.lbl.gov/srm  Scientific Annotation Middleware  Jim Myers, jim.myers@pnl.gov, www.emsl.pnl.gov:2080/docs/collab/sam/  Middleware Technology to Support Science Portals  Dennis Gannon, gannon@cs.indiana.edu  CoG Kits  Gregor von Laszewski, laszewski@mcs.anl.gov, www.cogkits.org  Pervasive Collaborative Computing Environment  Deb Agarwal, DAAgarwal@lbl.gov, www-itg.lbl.gov/Collaboratories/pcce.html  Middleware to Support Group-to-Group Collaboration  Rick Stevens, stevens@mcs.anl.gov, www-fp.mcs.anl.gov/fl/g2g/

21 Distributed Security Architectures
Principal Investigator: Mary Thompson, LBNL. Funding level: 3 FTEs. MICS Program Manager: Mary Anne Scott. Oct 23, 2001.
The Novel Ideas
- Authorization to access distributed resources based on X.509 identity certificates and other signed certificates: Policy, Use-condition and Attribute certificates.
- Multiple stakeholders remotely controlling access to distributed resources.
- Authorization server easily called from a resource gatekeeper.
Impact and Connections
- IMPACT: Provide an authorization service based on X.509 identity certificates, compatible with GSI/SSL connections, that can be easily used by distributed applications. Resource stakeholders can flexibly set access policy on a per-resource granularity.
- CONNECTIONS: Used by the National Fusion Collaboratory to provide authorization for their distributed applications. Run as a server on the DOE Science Grid so that applications running there can use it. Used in an SBIR project, CoDeveloper, which facilitates multi-domain code development.
Milestones/Dates/Status
- Release Akenti 1.1 (libraries and standalone server; more flexible certificate formats) – DONE
- Recognize new-style proxy X.509 certificates; add code to verify certificate chains – Yr 1
- Integration with GSI callouts; allow Akenti authorization to replace grid-mapfile – Yr 2
- Support for additional dynamic policies as required by Grid and collaboratory applications – Yr 2-3
[Diagram: Akenti server architecture – a client contacts a resource server, which calls the Akenti server; Akenti fetches Policy/Use-condition/Attribute certificates from web, LDAP, and file certificate servers through a certificate cache manager and records decisions with a log server.]

22 SciDAC DataGrid Middleware
Principal Investigators: I. Foster, ANL; C. Kesselman, ISI; Miron Livny, UW. National Collaboratories Program. MICS Program Manager: Mary Anne Scott. September 14, 2001.
The Novel Ideas
- Develop new protocols taking advantage of unique properties of data grids
- Develop innovative techniques for co-reservation of compute, network, and storage resources with guarantees for each stage
- Develop market brokering services to make efficient decisions in the face of constraints, when multiple resources are available to fulfill a request
- Investigate variants of two-phase I/O strategies used in parallel I/O systems for data transfer optimization
- Develop intelligent, adaptive recovery and performance strategies based on knowledge of end-to-end routes and guarantees
Impact and Connections
- IMPACT: Widespread acceptance of the protocols and services developed will ensure interoperability of data grids. High-quality APIs and SDKs implementing these protocols will be provided to allow easier access to data grid technology. Efficient, distributed replica management will improve data access efficiency.
- CONNECTIONS: To be used by numerous SciDAC collaboratories, including DOE Science Grid, Particle Physics Data Grid, Earth Systems Grid, and Fusion Collaboratory; uses Security Middleware components. Also to be used by many non-DOE projects worldwide, including NSF PACI DTF, NASA IPG, and EU DataGrid.
Milestones/Dates/Status
- Deliver GridFTP clients and servers (non-striped) – 10/01 (DONE)
- Deliver Replica Management Service – 04/02
- Replica Management Service used in real data grid – 08/02
- Distributed Replica Catalog – 01/03
- Deliver Extended Resource Manager & Information Service – 06/03
- New data channel technologies demonstrated (non-TCP, dynamic rate limiting, FEC, etc.) – 10/03
- Replica Management Service used with SRM for reliable, high-performance, scheduled transfer – 05/04
- Demonstrate alternative data management approaches – 10/05
- Deliver advanced DataGrid management systems with common SRM services – 10/06

23 Reliable and Secure Group Communication
Principal Investigator: Deb Agarwal, LBNL. MICS Program Manager: Mary Anne Scott.
The Novel Ideas
- Developing the infrastructure needed to support true peer-to-peer communication
- Secure group communication that is peer-to-peer and based on cryptographic algorithms that are provably secure
- Reliable multicast capabilities that are scalable to the Internet
- Flexible message delivery options in terms of reliability and ordering
Impact and Connections
- IMPACT: Improved communication infrastructure for collaborative applications, enabling truly peer-to-peer applications. Many-to-many group communication that scales to the Internet. A secure group layer that creates an SSL equivalent for group communication. Flexibility to implement a broad range of application requirements.
- CONNECTIONS: Pervasive Collaborative Computing Environment
Milestones/Dates/Status
The primary goal of this project is the development and implementation of group communication capabilities that are reliable and secure.
- Reliable Multicast: development of InterGroup – Years 1-2; beta release of the InterGroup protocol – Year 2; testing and implementation of additional features – Years 2-4
- Secure Group Layer: proofs of security for the cryptographic algorithms – Years 1-2; implementation of protocols – Years 2-4
- Improvements: enhancements to scalability and features – Year 5
[Diagram: group communication]

24 Storage Resource Management for Data Grid Applications
Principal Investigator: Arie Shoshani, LBNL. Co-principal Investigator: Don Petravick, FNAL. Collaboratory Middleware. MICS Program Manager: Mary Anne Scott. 9/7/2001.
The Novel Ideas
- SRMs provide Grid Middleware to manage storage resources, complementing the management of compute and network resources; large file transfers can form the bottleneck
- Coordinate distributed disk caches by “pinning” of files, using “smart” replacement policies
- Manage seamless access to tape storage: automate staging and archive requests in the background, insulate clients from hardware and network failures
Impact and Connections
- IMPACT: Provides an essential component of Grid Middleware. Will enable the dynamic coordination of compute and storage resources. Supports storage management for long-lasting simulation and analysis tasks in a distributed environment. Manages job recovery from storage system and network failures, facilitating uninterrupted operation.
- CONNECTIONS: Particle Physics Data Grid; Earth Science Data Grid; Globus Grid projects
Milestones/Dates/Status
- Design and develop SRM prototypes: design functionality and interfaces – Oct 2001 (DONE); implement prototype Disk RM – Jan 2002; implement prototype Tape RM – Feb 2002
- Deploy in PPDG STAR project: initial installation at LBNL – Mar 2002; initial installation at BNL – Jun 2002
- Robustness design and development: design – Jan 2003; development of robust SRMs – Jun 2003; deployment – Jan 2004

25 Scientific Annotation Middleware
Principal Investigators: Jim Myers, Elena Mendoza – PNNL; Al Geist, Jens Schwidder – ORNL. National Collaboratories Program. MICS Program Manager: Mary Anne Scott. 11/02/01.
Novel Ideas
- A lightweight, flexible middleware to support the creation and use of metadata and annotations
- Layered architecture; use of standard protocols
- Sharing of annotations among portals and problem solving environments, software agents, scientific applications, and electronic notebooks
- Support for arbitrary schema; configurable schema translation and metadata extraction
- Improved completeness, accuracy, and availability of the scientific record
- Integrating annotation and records functionality with primary data stores
Impact and Connections
- IMPACT: Single unified metadata infrastructure. A service for next-generation scientific computing environments with significant reduction of integration barriers. An advanced notebook view of annotation data.
- CONNECTIONS: The project will work closely with interested Collaboratory Pilot projects, including the Collaboratory for Multiscale Chemical Science, and will investigate integration/connection opportunities with other infrastructure efforts, including those developed through the Scientific Data Management Center and Portal Middleware projects. External partnerships are also being investigated.
Proposed Timetable (Specification / Alpha Release / 1.0 Release / 1.5 Release)
- Metadata Services: 9/01 / 3/02 / 9/02 / 9/03
- Semantic Services: 12/01 / 7/02 / 12/02 / 7/04
- Notebook Services: 3/02 / 12/02 / 7/03 / 7/05
- Interface Components: 3/02 / 12/02 / 7/03 / 7/05
- Pedigree Schema: 9/02 / 7/03
- Notebook Interface: 3/02* / 12/03 / 7/04

26 Middleware Technology to Support Science Portals: a Gateway to the Grid
Principal Investigators: Dennis Gannon, Randall Bramley, Department of Computer Science, Indiana University. MICS/SciDAC National Collaboratories and High Performance Networks: Middleware. MICS Program Manager: Mary Anne Scott. Date prepared: 1/02.
The Novel Ideas
- A Science Portal is a tool that makes it easy to access Grid-based scientific applications from simple desktop web tools.
- The portal is organized as a set of “Active Notebooks” which contain the web forms needed to launch and control the application, as well as histories of the user’s prior experiments with that application. These histories contain the parameters used in each run and links to output files.
- The execution of a notebook application is governed by a “grid script” for that notebook. Users may edit the scripts to create new application notebooks which can be shared with others. Notebook applications may, and often do, link multiple scientific grid “components” and services.
Impact and Connections
- IMPACT: Encapsulating Grid applications into science portals will make grid-based solutions available to many more scientists than are currently using grid technology. Using component composition of application services provides the first simple model for Grid application programming. This will eliminate many of the obstacles to both building and using grid-based applications.
- CONNECTIONS: This work will use the common component technology from CCTTSS as well as SciDAC collaboration and Grid technology. Numerous SciDAC application communities will be approached to test the ideas.
Milestones/Dates/Status
- System design: notebook archive – 11/01 (DONE); secure access to grid services – 11/01 (DONE); advanced scripting (LBNL interaction) – 6/02; full multi-user, secure server + archive – 8/02; DOE Science Grid integration – 11/02
- Collaboration technology: integration with DOE Notebook interface – 8/02; integration with Access Grid technology – 1/03
- Applications: PPDG+ (Atlas/GriPhyN) prototype – 3/02; ESG + NCSA environmental hydrology – 5/02; other collaboratories – 5/02 through 3/03

27 SciDAC CoG Kits
Principal Investigators: Gregor von Laszewski, ANL; Keith Jackson, LBNL. MICS Program Manager: Mary Anne Scott. 09/07/2001.
The Novel Ideas
- Develop a common set of reusable components for accessing Grid services.
- Focus on supporting the rapid development of Science Portals, Problem Solving Environments, and science applications that access Grid resources.
- Develop and deploy a set of “Web Services” that access underlying Grid services.
- Integrate the Grid Security Infrastructure (GSI) into the “Web Services” model.
- Provide access to higher-level Grid services that are language independent and are described via commodity Web technologies such as WSDL.
Impact and Connections
- IMPACT: Allow application developers to make use of Grid services from higher-level frameworks such as Java and Python. Easier development of advanced Grid services. Easier and more rapid application development. Encourage code reuse, and avoid duplication of effort amongst the collaboratory projects. Encourage the reuse of Web Services as part of the Grids.
- CONNECTIONS: We are working closely with, or as part of, the Globus research project, and we work with a variety of major funded applications through SciDAC, NSF, and EU grants, e.g., DOE Science Grid, Earth Systems Grid, Supernova Factory, NASA IPG.
Milestones/Dates/Status
The main goal of this project is to create Software Development Kits in both Java and Python that allow easy access to Grid services.
- Provide access to basic Grid services: GRAM, MDS, Security, GridFTP – Year 1; Replica Catalog, co-scheduling – Years 1-2
- Composable components: develop guidelines for component development – Year 1; design and implement component hierarchies – Years 1-2; develop a component repository – Years 2-3
- Web Services: integrate GSI – Year 1; develop an initial set of useful web services – Years 1-2
[Diagram: layered CoG architecture – the Java and Python CoG Toolkits sit between the Globus Toolkit and commodity Java/Python tools and services, supporting composable CoG components, a Java distributed programming framework, portals, Java/Python IDEs, and application domains such as high energy physics, biology, chemistry, and earth science.]

28 Pervasive Collaborative Computing Environment
Principal Investigators: Deb Agarwal – LBNL, Miron Livny – UW. MICS Program Manager: Mary Anne Scott.
The Novel Ideas
- Focus on providing collaboration tools that enable connectivity and collaboration on a day-by-day basis
- Develop workflow tools that will enable coordination of Grid computing processes and human tasks in a workflow framework
- Leverage the Grid computing environment (e.g., security and directory services)
- Support the continuum of collaborative interaction
Impact and Connections
- IMPACT: A persistent operating environment that facilitates day-to-day operations within collaborations. Natural collaboration capabilities for solving computation-based problems. Readily available building blocks that focus on typical problems found in scientific collaborations.
- CONNECTIONS: DOE Science Grid and Supernova Factory
Milestones/Dates/Status
The primary goal of this project is the development of the Pervasive Collaborative Computing Environment.
- Workflow development: installation of Condor-G – Year 1; develop underlying services – Years 1-2
- Collaboration capabilities: integration of Grid services – Year 1; secure human interaction environment – Year 2; support for asynchronous collaboration – Year 3
- Integration of components – Year 3

29 Middleware to Support Group to Group Collaboration
Principal Investigator: Rick Stevens, Argonne National Laboratory. MICS/SciDAC Middleware. MICS Program Manager: Mary Anne Scott. 9/13/2001.
The Novel Ideas
- Peer-to-peer Virtual Venues servers to enable worldwide, secure virtual communities through the use of high-end collaboration environments
- Collaborative work sharing beyond simple application sharing
- Integration of high-end visualization environments into collaborative spaces
- Methods of asynchronous collaboration: capture, synchronization, record, playback and annotation of collaborative experiences
Impact and Connections
- IMPACT: Widespread deployment and use of high-end collaboration technologies to further scientific inquiry. Advances in our understanding of the effects of distance-based collaboration environments on group dynamics and communication quality. Extending asynchronous collaboration capabilities to embrace all types of data streams exchanged in a collaboration, with synchronized capture and playback.
- CONNECTIONS: SciDAC collaboratory pilot projects; SciDAC Software Centers; Grid middleware for discovery, security and information services; other scientific collaborations
Milestones/Dates/Status (end of year 1)
- Venues Services: V1.0 architecture document and prototype; access control architecture; docking architecture and API
- Display: node management architecture, software V1.0 release; Xplit prototype and architecture white paper
- Asynchronous collaboration tools: software architecture definition documents; new media type plugins for existing tools; generalized Voyager server V2.0 release

30 Out Takes

31 Distributed Security Architectures Mary Thompson, LBNL  Novel Ideas  Secure and flexible way to authorize access to distributed resources Based on signed Policy, Use-condition and Attribute certificates.  Multiple stakeholders remotely control access to resources.  Authorization server easily called from a resource gatekeeper  Impact  An authorization service, based on X.509 identity certificates and compatible with GSI/SSL connections, that can be easily used by distributed applications.  Resource stakeholders can flexibly set access policy on a per-resource granularity.

32 Distributed Security Architectures Mary Thompson, LBNL  Novel Ideas  Secure and flexible way to authorize access to distributed resources Based on signed Policy, Use-condition and Attribute certificates.  Multiple stakeholders remotely control access to resources.  Authorization server easily called from a resource gatekeeper  Impact  An authorization service, based on X.509 identity certificates and compatible with GSI/SSL connections, that can be easily used by distributed applications.  Resource stakeholders can flexibly set access policy on a per-resource granularity. [Diagram: Akenti server architecture – client, resource server, Akenti server, web/LDAP/file certificate servers, certificate cache manager, log server]

33 SciDAC DataGrid Middleware I. Foster, ANL; C. Kesselman, ISI; Miron Livny, UW  Novel Ideas  New protocols take advantage of unique properties of data grids  Innovative techniques for co-reservation of compute, network, and storage resources, and market brokering services  Variants of two-phase I/O strategies  Intelligent, adaptive recovery and performance strategies  Impact  Common protocols & services will ensure interoperability of data grids  APIs and SDKs implementing these protocols will be provided to allow easier access to data grid technology  Efficient, distributed replica management will improve data access efficiency

34 Reliable and Secure Group Communication Deb Agarwal, LBNL  Novel Ideas  Infrastructure to support true peer-to-peer communication  Secure peer-to-peer group communication  Reliable multicast capabilities that are scalable to the Internet  Flexible message delivery in terms of reliability and ordering  Impact  Flexible communication infrastructure for collaborative applications that are truly peer-to-peer  Many-to-many group communication that scales to the Internet  Group layer that creates an SSL equivalent for group communication

35 Storage Resource Management for Data Grid App’s Arie Shoshani, LBNL; Don Petravick, FNAL  Novel Ideas  Grid Middleware to manage storage resources  Coordinate distributed disk caches by “pinning” of files  Manage seamless access to tape storage  Impact  Provides essential component of Grid Middleware  Will enable dynamic coordination of compute and storage resources  Support storage management for long lasting simulation and analysis tasks in a distributed environment  Manage job recovery from storage system and network failures, facilitating uninterrupted operation

36 Storage Resource Management for Data Grid App’s Arie Shoshani, LBNL; Don Petravick, FNAL  Novel Ideas  Grid Middleware to manage storage resources  Coordinate distributed disk caches by “pinning” of files  Manage seamless access to tape storage  Impact  Provides essential component of Grid Middleware  Will enable dynamic coordination of compute and storage resources  Support storage management for long lasting simulation and analysis tasks in a distributed environment  Manage job recovery from storage system and network failures, facilitating uninterrupted operation

37 Scientific Annotation Middleware Jim Myers, Elena Mendoza, PNNL; Al Geist, Jens Schwidder, ORNL  Novel Ideas  Lightweight, flexible middleware to support the creation and use of metadata and annotations  Sharing of annotations among portals and problem solving environments, software agents, scientific applications, and electronic notebooks  Single unified metadata infrastructure  Impact  Improved completeness, accuracy, and availability of the scientific record  Significant reduction of integration barriers  An advanced notebook view of annotation data

38 Scientific Annotation Middleware Jim Myers, Elena Mendoza, PNNL; Al Geist, Jens Schwidder, ORNL  Novel Ideas  Lightweight, flexible middleware to support the creation and use of metadata and annotations  Sharing of annotations among portals and problem solving environments, software agents, scientific applications, and electronic notebooks  Single unified metadata infrastructure  Impact  Improved completeness, accuracy, and availability of the scientific record  Significant reduction of integration barriers  An advanced notebook view of annotation data [Diagram: SAM architecture – electronic notebook interfaces, applications, agents, and problem solving environments reach the Scientific Annotation Middleware through web components and service interfaces; SAM provides notebook services (records management, annotation, timestamps, signatures, import/export/archive), search and semantic navigation services, and metadata management services over a data store interface to data archives.]

39 Building Distributed Applications Collaboratories, Portals, and Problem-Solving Environments  Use standard application components in portals  Use Grid services from high level frameworks  Tools for constructing scientific workflow  Create virtual venues for collaborative work Portal : a science-oriented PSE, typically with a web browser interface, that allows scientists to compose and run distributed applications, or to access and analyze distributed data.

40 Middleware Technology to Support Science Portals Dennis Gannon, Randall Bramley, Indiana U.  Novel Ideas  A Science Portal that makes it easy to access Grid-based scientific applications from simple desktop web tools  Organized as a set of “Active Notebooks” with web forms to launch and control the application, as well as histories  Execution is governed by a “grid script” for that notebook  Impacts  Encapsulating Grid app’s into science portals will make grid-based solutions easier to build and available to more scientists  Using component composition of application services provides the first simple model for Grid application programming

41 SciDAC Commodity Grid (CoG) Kits Gregor von Laszewski, ANL; Keith Jackson, LBNL  Novel Ideas  Common set of reusable components for accessing Grid services  Support rapid development of Science Portals, Problem Solving Environments, and applications that access Grid resources  “Web Services” that access underlying Grid services.  Impact  Allow use of Grid services from higher-level frameworks  Easier development of advanced Grid services  Easier and more rapid application development  Encourage code reuse, and reuse of Web Services

42 SciDAC Commodity Grid (CoG) Kits Gregor von Laszewski, ANL; Keith Jackson, LBNL  Novel Ideas  Common set of reusable components for accessing Grid services  Support rapid development of Science Portals, Problem Solving Environments, and applications that access Grid resources  “Web Services” that access underlying Grid services.  Impact  Allow use of Grid services from higher-level frameworks  Easier development of advanced Grid services  Easier and more rapid application development  Encourage code reuse, and reuse of Web Services [Diagram: layered CoG architecture – Java and Python CoG Toolkits between the Globus Toolkit and commodity Java/Python tools and services, with composable CoG components, portals, IDEs, and application domains]

43 Pervasive Collaborative Computing Environment Deb Agarwal, LBNL; Miron Livny, U Wisconsin  Novel Ideas  Collaboration tools that enable connectivity and collaboration on a day-by-day basis  Workflow tools that enable coordination of Grid computing processes and human tasks  Support the continuum of collaborative interaction  Impact  A persistent operating environment that facilitates day-to-day operations within collaborations  Natural collaboration capabilities for computation-based problems  Building blocks for typical problems in scientific collaborations

44 Middleware to Support Group to Group Collaboration Rick Stevens, ANL  Novel Ideas  Peer-to-peer Virtual Venues servers to enable worldwide, secure virtual communities via high-end collaboration env’ts  Collaborative work sharing beyond simple application sharing  High end visualization env’ts integrated into collaborative spaces  Methods of asynchronous collaboration  Impact  Wide-spread deployment and use to further scientific inquiry  Advances in our understanding of the effects of distance based collaboration environments on group dynamics  Extending asynchronous collaboration capabilities to embrace all types of data streams exchanged in a collaboration

45 How do we get some of these things? Can we influence the direction of projects?  Feedback is essential. Middleware PIs want to interact with you !  All projects have dissemination plans.  Learn project details at tomorrow’s Poster Session.  Keep the dialogue going.

46 What collaboratory and middleware tools can we use today?  To Work Together  Conferencing : Access Grid, H.323, ISDN, NetMeeting, MSN Messenger, vic/vat, ImmersaDesk  eMail Lists : majordomo, mailman  Shared Documents : Web, NFS, ELN, EN, Notes  Shared Display : NetMeeting, Access Grid, vnc, SameTime  Code Repository : cvs  To Build Distributed Applications  Grid Services : Globus, Condor, Legion, Harness, Cactus, CoG  Authentication Certificates : Netscape, Akenti, Globus
