NG07 summary: Grid state of the art, solutions, infrastructure, and KnowARC topics. Weizhong Qiang, November 2, 2007.


Outline  Grid state of the art  Some grid projects and their strategies  Grid solutions  Grid middleware, grid applications, grid interoperability  Most of them are ARC related  Grid infrastructures  KnowARC task/WP meetings, and KnowARC discussion in the NorduGrid technical meeting

gLite: gLite strategy towards standards  How to achieve interoperability in the absence of adequate standards:  Parallel infrastructures  User driven: the user joins different grids (uses different tools)  Site driven: the site offers different access methods to resources  Gateways  Bridges between infrastructures  Adaptors and translators  Single API for users; plug-ins provide the translation
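The "single API plus translating plug-ins" idea above is essentially the adapter pattern. A minimal sketch in Python, with all class and method names hypothetical (they are illustrations of the concept, not actual gLite or ARC APIs):

```python
from abc import ABC, abstractmethod

# Hypothetical common job-submission API: the user codes against one
# interface, and each plug-in translates it to a middleware's native one.
class JobAdapter(ABC):
    @abstractmethod
    def submit(self, executable: str) -> str: ...

class GliteAdapter(JobAdapter):
    def submit(self, executable: str) -> str:
        # A real plug-in would build a JDL description and call the gLite WMS.
        return f"glite-job-id-for-{executable}"

class ArcAdapter(JobAdapter):
    def submit(self, executable: str) -> str:
        # A real plug-in would build an xRSL description and contact an ARC CE.
        return f"arc-job-id-for-{executable}"

def submit_job(adapter: JobAdapter, executable: str) -> str:
    # User-facing entry point: the adapter hides which grid is behind it.
    return adapter.submit(executable)
```

Swapping the adapter object is all that changes when the same job goes to a different infrastructure.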

Interoperability models (diagram; Claudio Grandi, NorduGrid Conference, Copenhagen, 24 September): user-driven parallel infrastructures, site-driven parallel infrastructures, gateways, and adaptors/translators behind an API plug-in.

gLite: gLite strategy towards standards  What is really relevant for interoperation?  Interoperability needs to be provided for Foundation Grid Middleware:  Security: Authentication and Authorization  Information systems: Information Schema and Service Discovery  Data Management: Data Access and Data Transfer  Job Management: Job submission and monitoring  High Level Services may help in building gateways, adapters and translators

gLite: gLite strategy towards standards  gLite needs to interoperate with other infrastructures  Use of standards is the way to go, but in the meantime pragmatic approaches to interoperability are needed  Focus is on the Grid Foundation middleware  Security  Certificates for AuthN and VOMS for AuthZ  Shibboleth SLCS for short-lived certificates  Information systems  GLUE schema (1.3 now, 2.0 in the future) and adapters to interoperate  Data Management  SRM 2.2 interface for data access  GridFTP (de-facto standard) for file transfers  Job Management  CE (Computing Element)  BES (Basic Execution Service) interface in the future (CREAM)  Legacy pre-WS GRAM deployed in parallel  gLite WMS, Condor-G and BLAH to build gateways  OGF-UR will be used for accounting
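The BES interface mentioned above accepts job descriptions in JSDL. A minimal sketch of building one in Python, assuming the JSDL 1.0 and POSIXApplication namespaces; a real BES front end would expect more elements (resources, data staging), so treat this as an illustration of the document shape only:

```python
import xml.etree.ElementTree as ET

# Namespaces as published for JSDL 1.0 (GFD.56) and its POSIX extension.
JSDL = "http://schemas.ggf.org/jsdl/2005/11/jsdl"
POSIX = "http://schemas.ggf.org/jsdl/2005/11/jsdl-posix"

def make_jsdl(executable, *args):
    """Build a minimal JSDL job description as an XML string."""
    job = ET.Element(f"{{{JSDL}}}JobDefinition")
    desc = ET.SubElement(job, f"{{{JSDL}}}JobDescription")
    app = ET.SubElement(desc, f"{{{JSDL}}}Application")
    posix = ET.SubElement(app, f"{{{POSIX}}}POSIXApplication")
    ET.SubElement(posix, f"{{{POSIX}}}Executable").text = executable
    for a in args:
        ET.SubElement(posix, f"{{{POSIX}}}Argument").text = a
    return ET.tostring(job, encoding="unicode")
```

Such a document is what a standards-based client would POST to any BES endpoint, whether backed by CREAM, A-REX or UNICORE.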

The Contribution of OMII-Europe towards Standards-based Grid Middleware  Focus  Achieving interoperability through common standards  Common standards are the long-term solution  Significant involvement and success in OGF and OASIS  Implementations of standards in tandem with standards development on all middleware platforms

Approaches to Interoperability  Adapter-based:  the ability of Grid middleware to interact via adapters that translate specific design aspects from one domain to another  Standards-based:  the native ability of Grid middleware to interact directly via well-defined interfaces and common open standards

What is OMII-Europe Doing?  Initial focus on providing common interfaces and integration of major Grid software infrastructures  Common interoperable services:  Database access (WS-DAI, WS-DAIX, WS-DAIR (OGSA-DAI))  Virtual organization management (SAML)  Accounting (Usage Record (UR), Resource Usage Service (RUS))  Job submission and job monitoring (JSDL, BES)  Infrastructure integration  Initial gLite/UNICORE/Globus interoperability  Interoperable security framework  Access to these infrastructure services through a portal

UNICORE 6  Long-established middleware with a user-friendly interface and support for many applications  Starting to support standard Web service specifications  Service container  WSRF 1.2, WS-ServiceGroup, WS-BaseNotification, WS-I

UNICORE 6  Basic services  UNICORE Atomic Services  Target system creation  Job submission and job management  File system/storage access  File import/export control  Registry  Publishes services (address, service description)  Shareable between sites  Single point of entry for clients

UNICORE 6  Security  Authentication: X.509 mutual authentication  Transport-level communication: SSL  Message-level security information: SAML assertions  Message-level communication: digitally signed  Policy decisions in the authorization process: XACML 1.0  Trust delegation: SAML trust-delegation tokens
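The transport-security part of the slide, X.509 mutual authentication over SSL, can be sketched with Python's standard ssl module. This is an illustration of the configuration idea, not UNICORE's actual (Java-based) implementation; the function name and file paths are placeholders:

```python
import ssl

def mutual_tls_context(certfile=None, keyfile=None, cafile=None):
    """Server-side TLS context for X.509 mutual authentication:
    the server presents its own certificate and also requires one
    from each client, as in UNICORE 6 transport security.
    File paths are placeholders for real credentials."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    if certfile:
        ctx.load_cert_chain(certfile, keyfile)    # server identity
    if cafile:
        ctx.load_verify_locations(cafile=cafile)  # trusted CA certificates
    ctx.verify_mode = ssl.CERT_REQUIRED           # reject clients without a cert
    return ctx
```

The single `verify_mode` line is what turns ordinary server-authenticated TLS into mutual authentication; everything else is standard certificate plumbing.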

UNICORE 6  File transfer  Simple OGSA ByteIO  Client  GPE for UNICORE

XtreemOS: a Grid operating system providing native support for virtual organizations  Design, implementation, evaluation and distribution of an open-source Grid operating system  With native support for virtual organizations (VOs)  And capable of running on a wide range of underlying platforms, from clusters to mobile devices

XtreemOS  XtreemOS:  A new Grid OS, based on the existing general-purpose Linux OS  A set of system services (extending those found in traditional Linux) will provide users with all the Grid capabilities associated with current Grid middleware, but fully integrated into the OS  The underlying Linux will be extended as needed to support VOs spanning many machines and to provide appropriate interfaces to the Grid OS services

MiG: Minimum intrusion Grid  Criticizes and rethinks the grid solution  Problems found in the existing models (not every problem in every middleware):  Single point of failure  Lack of scheduling  Poor scalability  No means of implementing privacy  No means of utilizing 'cycle scavenging'  Firewall dependency  Highly bloated middleware  Many programming languages instead of a few well-chosen ones  No economy

MiG  MiG rules  Nothing produced by MiG can be required to be installed on either the resource or the client end  Everything within MiG must be implemented in Python unless another language is absolutely required  Any design and implementation decision must optimize towards transparency for the users  Anything that is not right must be thrown away

Open Science Grid  Concentrates on resource integration, middleware deployment and application support  OSG goals – use of existing resources:  Enable scientists to use and share a greater percentage of available compute cycles  Help scientists use distributed systems, storage, processors and software with less effort  Enable more sharing and reuse of software, and reduce duplication of effort, by providing effort in integration and extensions  Establish an 'open-source' community working together to communicate knowledge and experience

Open Science Grid  The OSG Facility:  Helps sites join the OSG facility and enables effective guaranteed and opportunistic usage of their resources (including data) by remote users  Helps VOs join the OSG facility and enables effective guaranteed and opportunistic harnessing of remote resources (including data)  Defines interfaces which people can use  Maintains and supports an integrated software stack that meets the needs of the stakeholders of the OSG consortium  Reaches out to non-HEP communities to help them use the OSG  Trains new users, administrators and software developers

IBM and Grid computing now and in the future  Spans many areas, difficult to summarize:  Grid and virtualization  Grid and green computing  Grid and data: challenges, using the example of the healthcare provider industry  Germany's D-Grid  "Staged HPC with Grids?"

Outline  Grid state of the art  Some grid projects and their strategies  Grid solutions  Grid application-level middleware, grid applications, grid interoperability  Most of them are ARC related  Grid infrastructures  KnowARC task/WP meetings, and KnowARC discussion in the NorduGrid technical meeting

 Grid interoperability  gLite/ARC interoperability (gateway method)  Interoperability between EGEE, NorduGrid and OSG via Cronus (virtual batch system across the grid federations; adaptor method?)  Application-level middleware  LUNARC Application Portal  Extending ARC to enable bioinformatics applications (dynamic runtime environment (RTE) management framework, dynamic installation, Janitor Catalog)  Using Taverna as a dynamic RTE within ARC (open-source tool for designing and executing workflows, used as one use case of Janitor?)

 Grid applications  Usage of the LUNARC portal in bioinformatics (QTL analysis)  Applications in bioinformatics  Computational Biology Enabled by Grid Computing: Application Examples and Lessons Learned from the SwissBioGrid Initiative  Bioinformatic applications at NDGF: status and plans  Grid-based medical image management  ARC and High Energy Physics  Experience with ARC usage in computing and education in St. Petersburg

 We had a specific session about KnowARC in the "Grid solutions" part  The HED framework of the new ARC  Core services of the new ARC  Next-generation back-ends  Security framework of the new ARC  Storage system design by KnowARC  Proposes a new DHT-based storage solution

Outline  Grid state of the art  Some grid projects and their strategies  Grid solutions  Grid application-level middleware, grid applications, grid interoperability  Most of them are ARC related  Grid infrastructures  KnowARC task/WP meetings, and KnowARC discussion in the NorduGrid technical meeting

 M-Grid (the Finnish Material Sciences Grid) (ARC)  GRID implementation at SiGNET - NDGF-T2 (a testbed; ARC, gLite)  LitGrid - grid infrastructure for Lithuania (gLite)  Baltic Grid (based on gLite, with some extensions, e.g. Tycoon for distributed resource allocation)  D-Grid, middlewares and applications (based on Globus, gLite, UNICORE; with a few sub-projects)  The Norwegian infrastructure for computational science: HPC, storage and grid

Outline  Grid state of the art  Some grid projects and their strategies  Grid solutions  Grid middleware, grid applications, grid interoperability  Most of them are ARC related  Grid infrastructures  KnowARC task/WP meetings, and KnowARC discussion in the NorduGrid technical meeting

 KnowARC  is supposed to provide a new-generation grid middleware based on ARC  to promote standardization and interoperability  to support applications in new areas

 What we have now:  HED (Web service container; provides SOAP parsing, TLS/SSL, and an API for Web service development)  Core services:  A-REX (execution service, Web service on top of the ARC0 grid-manager, BES compatible)  Echo (test service)  Basic information service (provides the Local Information Description Interface)  Policy engine (supports XML policy; will be extended to the XACML standard)  Data storage service (based on ARC0 functionality)  Dynamic Runtime Environment, Janitor
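The SOAP-parsing role HED plays before dispatching a request to a service like A-REX or Echo can be illustrated with a short sketch. This is not HED's actual (C++) code, only the concept of extracting the payload from a SOAP 1.1 envelope; the function name and the example `echo` service namespace are invented for illustration:

```python
import xml.etree.ElementTree as ET

# SOAP 1.1 envelope namespace.
SOAP_ENV = "http://schemas.xmlsoap.org/soap/envelope/"

def soap_body_children(envelope_xml):
    """Parse a SOAP envelope and return the payload elements inside
    its Body -- the dispatching step a service container performs
    before routing a request to the right service."""
    root = ET.fromstring(envelope_xml)
    body = root.find(f"{{{SOAP_ENV}}}Body")
    return list(body)

# A minimal request a client might send to a toy echo service.
request = f"""
<soap:Envelope xmlns:soap="{SOAP_ENV}">
  <soap:Body>
    <echo xmlns="urn:example:echo">hello</echo>
  </soap:Body>
</soap:Envelope>
"""
```

The container would inspect the namespace of the Body's first child to decide which registered service handles the message.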

KnowARC task/WP meetings  Presentations and discussion about:  KnowARC/ARC1 middleware:  Hosting Environment Daemon (HED)  Information Indexing Service design (will be based on P2P technology and use JXTA)  Instant CA service (tries to provide convenient certificates for users)  Applications:  Flowgrid and ARC  Medical image management & processing with ARC (based on ARC0)

KnowARC discussion in the NorduGrid technical meeting  The agenda was rescheduled and confusing  Presentations and tough discussion:  GLUE 2 (new standard information schema)  Next-generation ARCLIB  Security framework of the new ARC (discussion of the policy engine)  Storage system design by KnowARC (discussion of the proposed design)  Integrated stage-in/out capability of the ARC Grid Manager (explanation of the grid manager)  Job priorities

 Sandboxing & virtualization (provides a virtual-machine based solution)  HED: performance & WS-compatibility (performance tests of ARC0 and ARC1)  Runtime Environments & Application Catalogues  Security/Taverna integration/Storage (parallel discussions)  ARC1 Infosys and HED back-ends