UNICORE in PL-Grid
Piotr Bała, Marcin Radecki, Krzysztof Benedyczak
ICM, University of Warsaw; ACK Cyfronet
PL-Grid Consortium
Consortium creation: agreement signed in January 2007.
Consortium members: five Polish supercomputing and networking centres:
- Academic Computer Centre CYFRONET AGH, Krakow (coordinator)
- Interdisciplinary Centre for Mathematical and Computational Modelling, University of Warsaw
- Poznan Supercomputing and Networking Centre
- Academic Computer Centre, Gdansk
- Wroclaw Centre for Networking and Supercomputing
The PL-Grid project proposal was funded by the European Regional Development Fund as part of the Innovative Economy Programme on March 2, 2009.
Result: the first working NGI in Europe within the framework of EGI.eu.
Status of Hardware Infrastructure
Current status (October 2010):
Computational power:
- CYFRONET: 29,3 Tflops
- ICM: ,1 Tflops
- PCSS: ,2 Tflops
- Total: 65,6 Tflops
Disk storage:
- CYFRONET: 313 TBytes
- ICM: TBytes
- PCSS: TBytes
- Total: 1046 TBytes
Plans until the end of 2010: 185 Tflops, 1900 TBytes.
Plans until the end of the project (end of 2011): 215 Tflops, 2500 TBytes.
PL-Grid Building Blocks
PL-Grid offers access to the same resources by means of:
- the gLite Grid middleware,
- the UNICORE Grid middleware,
- the QosCosGrid middleware,
- specialized client-side applications such as VineToolkit or MigratingDesktop.
Different middlewares provide access to the same computing hardware; storage resources are partitioned between the middlewares.
User management is centralized in the PL-Grid central user registration system (portal) and replicated to:
- local LDAP instances,
- the UVOS VO service,
- the VOMS VO service,
- gridmap files (used by QCG).
Three Grid structures are maintained: production, research, and development/testing.
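For illustration, a gridmap file of the kind replicated to QCG maps certificate subject DNs onto local accounts, one entry per line. A minimal sketch (both the DN and the account name below are invented examples, not real PL-Grid entries):

```
# grid-mapfile: maps a certificate subject DN to a local Unix account.
# The DN and account name are hypothetical.
"/C=PL/O=GRID/O=ICM/CN=Jan Kowalski" plgrid_jkowalski
```

When the central registration portal adds or removes a user, entries like this are regenerated and pushed out alongside the LDAP, UVOS and VOMS updates.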
UNICORE production infrastructure in PL-Grid
Sites: ICM, Cyfronet, WCSS. Two more centres will become operational in the coming weeks.
Currently 14 scientific applications are integrated with UNICORE; more are available through scripts.
Central services (hosted at ICM):
- Gateway
- Registry
- Workflow Service
- Service Orchestrator
- UVOS
Execution services installed at each site:
- UNICORE/X
- TSI
UNICORE training and testing infrastructure
Training infrastructure:
- A fully separated copy of the production infrastructure.
- Contains the central services and two target systems.
- Example software: BLAST, CLUSTAL, PovRay, R.
- Access is granted semi-automatically.
- PnP CA integrated with a separate UVOS instance.
Test infrastructure:
- Used for internal testing of the project's UNICORE-related software.
- Runs on dedicated virtual machines.
- Integrated with the production UVOS, but uses a separate VO.
UNICORE services in GOCDB
Central services:
- unicore6.Registry
- unicore6.ServiceOrchestrator
- unicore6.WorkflowFactory
- unicore6.UVOSAssertionQueryService
Site services:
- unicore6.Gateway
- unicore6.TargetSystemFactory
- unicore6.StorageManagement
- unicore6.StorageFactory
Instances are registered at the ICM-PLGRID testing (uncertified) site.
Monitoring of the UNICORE infrastructure
Nagios probes developed in PL-Grid are currently being moved to EMI.
Deployed on the production instance of the Polish SAM/Nagios.
Integrated into the official SAM Nagios release:
- included in Update-14, to be released soon,
- automatic configuration by NCG based on GOCDB data.
When stable, the GOCDB services will be moved to the production site.
A definition of site availability with respect to the UNICORE "CE" is still needed.
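As a sketch of how such a probe plugs into Nagios, the snippet below shows a standard Nagios object definition pairing a check command with a monitored service. The host name, plugin name and port are hypothetical, not the actual PL-Grid probe names (which in production are generated automatically by NCG from GOCDB data):

```
# Hypothetical Nagios configuration for a UNICORE gateway probe.
define command {
    command_name  check_unicore_gateway
    command_line  $USER1$/check_unicore_gateway -H $HOSTADDRESS$ -p 8080
}

define service {
    use                  generic-service
    host_name            unicore-gw.example.pl       ; hypothetical host
    service_description  unicore6.Gateway
    check_command        check_unicore_gateway
}
```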
Accounting
A dedicated solution was developed:
- integrated with the regional PL-Grid accounting infrastructure (BAT),
- UNICORE will publish to the EGI repository independently,
- stable, open and flexible,
- might be integrated with the core UNICORE distribution(?).
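A standards-based accounting system of this kind typically exchanges job records in the OGF Usage Record (UR) XML format. The fragment below is a minimal, hypothetical sketch of such a record; every identifier, host name and value is invented for illustration:

```xml
<!-- Hypothetical OGF Usage Record (GFD.98-style); all values are invented. -->
<JobUsageRecord xmlns="http://schema.ogf.org/urf/2003/09/urf">
  <JobIdentity>
    <GlobalJobId>unicore6://icm.example.pl/jobs/000001</GlobalJobId>
  </JobIdentity>
  <UserIdentity>
    <GlobalUserName>/C=PL/O=GRID/O=ICM/CN=Jan Kowalski</GlobalUserName>
  </UserIdentity>
  <Status>completed</Status>
  <MachineName>cluster.icm.example.pl</MachineName>
  <WallDuration>PT1H30M</WallDuration>
  <EndTime>2010-10-01T12:00:00Z</EndTime>
</JobUsageRecord>
```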
Conclusions
Successes:
- Production deployment of the UNICORE middleware alongside the other middlewares: working and stable.
- A large number of improvements to the UNICORE middleware suggested and/or partially implemented; many bug reports and bug fixes submitted.
- A growing number of users who use the infrastructure intensively.
- A complete monitoring infrastructure for UNICORE developed from scratch, forming the basis of the current EGI/SAM UNICORE monitoring subsystem.
- A comprehensive, flexible, standards-based and fault-tolerant accounting system.
Failures and not yet solved issues:
- Production resources were enabled long after the project start, so the total number of users is low.
- Late integration with the EGI monitoring infrastructure and operations dashboard.
- Lack of integration of the accounting with the EGI infrastructure: the record format!