
PIONIER - Polish Optical Internet: The eScience Enabler
Jarek Nabrzyski, Poznan Supercomputing and Networking Center
Digital Divide and HEPGRID Workshop

Poland. Population: 38 mln. Area: ~312,700 km². Part of the EU since May 2004. Temperature today: 0 °C.

R&D Center
PSNC was established in 1993 and is an R&D center in:
– New Generation Networks: the POZMAN and PIONIER networks; the 6NET, SEQUIN and ATRIUM projects
– HPC and Grids: the GRIDLAB, CROSSGRID, VLAB, PROGRESS and CLUSTERIX projects
– Portals and Content Management Tools: the Polish Educational Portal, Multimedia City Guide, Digital Library Framework, Interactive TV

PIONIER, an idea of an „All Optical Network”; the facts:
– 4Q1999: programme proposal submitted to KBN
– 2Q2000: PIONIER testbed (DWDM, TNC 2001)
– 3Q2000: project accepted (tender for co-operation, negotiations with telcos)
– 4Q2001: Phase I, ~10 mln Euro; contracts with Telbank and Szeptel (1434 km)
– 4Q2002: Phase II, ~18.5 mln Euro; contracts with Telbank and regional power grid companies (1214 km); contract for equipment: 10GE & DWDM and an IP router
– 2H2003: installation of 10GE with DWDM repeaters/amplifiers; 16 MANs connected and 2648 km of fibers installed; contracts with partners Telbank and HAVE for 1426 km (Phase I, ~5 mln Euro)
– 2004/2005: 21 MANs connected with 5200 km of fiber

PIONIER fibers deployment, 1Q2004
(Map: installed fiber and PIONIER nodes; fibers started in 2003; fibers and nodes planned for 2004/2005.)

How we build fibers
– Co-investment with telco operators, or self-investment with right of way (power distribution, railways and public roads)
– Average of 16 fibers available: 4 x G.652 for the national backbone, 8 x G.652 for regional use, 4 x G.655 for long-haul transmission
– 2 pipes and one cable with 24 fibers available (2003)
– Average span length of 60 km for the national backbone (regeneration possible); a rough span count follows below
– Local loop construction is sometimes difficult (urban areas: an average 6-month wait for permissions)
– Found on time...
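How many amplification or regeneration sites that span length implies is easy to bound; a back-of-the-envelope sketch, assuming (our simplification, not the slide's) that the 60 km average span applies to all 5200 km of installed fiber rather than only the national backbone:

```python
# Rough upper bound on the number of spans needing amplification or
# regeneration, from the figures quoted on these slides.
fiber_km = 5200      # total fiber reached in 2004/2005 (earlier slide)
avg_span_km = 60     # average national-backbone span length

print(f"approx. {fiber_km / avg_span_km:.0f} spans")  # ~87
```

Every span saved, for example by the G.655 link optimization on the next slide, removes equipment from this count, which is where the quoted savings come from.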

Link optimization (a side effect of an urgent demand for a DCM module ;-)
– replacement of G.652 fiber with G.655 (NZDS) fiber
– similar cost of G.652 and G.655 fiber
– cost reduction via a lower number of amplifiers/regenerators and a lower number of DCMs
But: the optimisation is valid for selected cases only, and is wavelength/waveset/link dependent.

(Link design comparison: cost approximately 140 kEuro before optimization vs. approximately 90 kEuro after; cost savings, equipment only: 35%.)
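A quick sanity check of the quoted percentage, a minimal sketch using only the two price tags from the figure:

```python
# Equipment-only savings implied by the two link designs on the slide.
cost_g652 = 140_000  # EUR, original G.652 design with more amps and DCMs
cost_g655 = 90_000   # EUR, optimized G.655 (NZDS) design

savings = (cost_g652 - cost_g655) / cost_g652
print(f"equipment-only savings: {savings:.0%}")  # ~36%, i.e. the slide's ~35%
```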

Community demands as a driving force
Academic Internet:
– international connections: GÉANT 10 Gb/s, TELIA 2 Gb/s, GTS/SPRINT 2 Gb/s
– national connections between MANs (10 Gb/s, 622 Mb/s leased lambdas)
– near future: n x 10 Gb/s
High Performance Computing Centers (FC, GE, 10GE):
– Project PROGRESS, „Access environment to computational services performed by cluster of SUNs”: a SUN cluster (3 sites x 1 Gb/s); results presented at SC 2002 and SC 2003
– Project SGI, „HPC/HPV in Virtual Laboratory on SGI clusters”: an SGI cluster (6 sites x 1 Gb/s)
– Project CLUSTERIX, „National CLUSTER of LInuX Systems” (12 sites x 1 Gb/s)
– Project in preparation: a National Data Storage system (5 sites x 1 Gb/s)

Community demands as a driving force
Dedicated capacity for European projects:
– ATRIUM (622 Mb/s)
– 6NET (... Mb/s)
– VLBI (2 x 1 Gb/s dedicated)
– CERN-ATLAS (>1 Gb/s dedicated per site)
– near future: 6th FP IST projects

Intermediate stage: 10GE over fiber
(Network map: Metropolitan Area Networks in Gdańsk, Szczecin, Koszalin, Bydgoszcz, Toruń, Olsztyn, Białystok, Poznań, Warszawa, Zielona Góra, Łódź, Wrocław, Opole, Częstochowa, Katowice, Kraków, Bielsko-Biała, Kielce, Radom, Puławy, Lublin and Rzeszów; own fibers at 10 Gb/s, GÉANT at 10 Gb/s, leased channels at 622 Mb/s and 155 Mb/s.)

PIONIER, the economy behind
Cost reduction via:
– simplified network architecture: IP / ATM / SDH / DWDM → IP / GE / DWDM
– lower investment, lower depreciation: ATM / SDH → GE
– simplified management

PIONIER, the economy behind...
Cost relation (connections between 21 MANs, per year):
– 622 Mb/s channels from a telco (real cost): 4.8 MEuro
– 2.5 Gb/s channels from a telco (estimate): 9.6 MEuro
– 10 Gb/s channels from a telco (estimate): 19.2 MEuro
– PIONIER costs (5200 km of fibers, 10GE): 55.0 MEuro
– annual PIONIER maintenance costs: 2.1 MEuro
Return on investment in 3 years! (calculation made for only 1 lambda used)
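The three-year claim follows directly from those figures: the one-off build cost is recovered by the leasing fees it avoids, minus maintenance. A minimal check using only the numbers on the slide:

```python
# Break-even point of building PIONIER vs. leasing 10 Gb/s telco
# channels, using the per-year figures quoted on the slide.
build_cost = 55.0   # MEuro, one-off: 5200 km of fiber plus 10GE gear
maintenance = 2.1   # MEuro per year
lease_10g = 19.2    # MEuro per year for the equivalent leased channels

years = build_cost / (lease_10g - maintenance)
print(f"break-even after {years:.1f} years")  # ~3.2 years
```

With more than one lambda lit, the avoided leasing cost grows while the fiber cost does not, so the payback only gets faster.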

PIONIER – e-Region
Two e-Regions already defined:
– Cottbus – Zielona Góra (D-PL)
– Ostrava – Bielsko-Biała (CZ-PL)
e-Region objectives:
1. Creation of a rational base and the possibility of integrated work between institutions across the border, as defined by e-Europe: (...) education, medicine, natural disasters, information bases, protection of the environment.
2. Enhancing the ability to co-operate by developing a new generation of services and applications.
3. Promoting the region in Europe (as a micro-scale version of the e-Europe concept).

PIONIER – „Porta Optica”
„PORTA OPTICA”: a distributed optical gateway to the eastern neighbours of Poland (project proposal).
– A chance for close cooperation in scientific projects, by means of providing multichannel/multilambda Internet connections to the neighbouring countries.
– An easy way to extend GÉANT to Eastern European countries.

PIONIER – cooperation with neighbours
(Map: e-Region links toward Germany and the Czech Republic; Porta Optica links toward Slovakia, Ukraine, Belarus, Lithuania and Russia.)

(Diagram: the HPC network (5+3 sites) carrying HPC and IST projects: PROGRESS (3 sites), VLBI, ATLAS, other projects?)

CLUSTERIX, the National CLUSTER of LInuX Systems
– launched in November 2003; duration of 30 months
– its realization is divided into two stages: research and development (the first 18 months) and deployment (starting after the R&D stage and lasting 12 months)
– more than 50% funded by the consortium members
– consortium: 12 universities and the Polish Academy of Sciences

CLUSTERIX goals
– to develop mechanisms and tools that allow the deployment of a production Grid environment, with the basic infrastructure comprising local PC-clusters based on 64-bit Linux machines, located in geographically distant independent centers and connected by the fast backbone network provided by the Polish Optical Network PIONIER
– existing PC-clusters, as well as new clusters with both 32- and 64-bit architecture, will be dynamically connected to the basic infrastructure
– as a result, a distributed PC-cluster of a new generation, with a dynamically changing size, fully operational and integrated with the existing services offered by other projects related to the PIONIER program, will be obtained
– results in the software infrastructure area will allow for increasing the portability and stability of the software and the performance of services and computations in the Grid-type structure

CLUSTERIX objectives
– development of software capable of managing clusters with a dynamically changing configuration, i.e. a changing number of nodes, users and available services; one of the most important factors is reducing the management overhead (see the sketch after this list)
– new quality of services and applications based on the IPv6 protocols
– a production Grid infrastructure available to the Polish research community
– integration and use of existing services delivered as the outcome of other projects (data warehouse, remote visualization, computational resources of KKO)
– taking into consideration the local infrastructure administration and management policies of independent domains
– an integrated end-user/administrator interface
– providing the required security in a heterogeneous distributed system
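At its core, the dynamic-configuration objective is a membership problem: the scheduler must always see the cluster as it is now, not as it was configured. The following is an illustrative sketch only, not CLUSTERIX code; the class, the heartbeat policy and all names are invented for this example:

```python
# Illustrative sketch: tracking PC-clusters that attach and detach at
# runtime. Invented for this transcript; not CLUSTERIX software.
import time

class ClusterRegistry:
    def __init__(self, heartbeat_timeout=60.0):
        self.nodes = {}                      # node id -> last heartbeat
        self.heartbeat_timeout = heartbeat_timeout

    def attach(self, node_id):
        self.nodes[node_id] = time.time()    # new cluster joins the grid

    def heartbeat(self, node_id):
        if node_id in self.nodes:
            self.nodes[node_id] = time.time()

    def active_nodes(self):
        # Silent nodes are dropped, so resource management always works
        # against the current, not the configured, cluster size.
        now = time.time()
        self.nodes = {n: t for n, t in self.nodes.items()
                      if now - t < self.heartbeat_timeout}
        return sorted(self.nodes)

registry = ClusterRegistry()
registry.attach("pc-cluster-a")
registry.attach("pc-cluster-b")
print(registry.active_nodes())               # ['pc-cluster-a', 'pc-cluster-b']
```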

CLUSTERIX: Pilot installation

Architecture

CLUSTERIX: Technologies
– the software developed will be based on the Globus Toolkit v.3, using the OGSA (Open Grid Services Architecture) concept; this technology ensures software compatibility with other environments used for creating Grid systems and makes the created services easier to reuse; accepting OGSA as a standard will allow the services to co-operate with other meta-clusters and Grid systems
– Open Source technology: allows anybody to access the project source code, modify it and publish the changes; makes the software more reliable and secure; open software is easier to integrate with existing solutions and helps other technologies using Open Source software to develop
– integration with existing software will be used extensively, e.g. the GridLab broker and the Virtual Users Account System

CLUSTERIX: R&D
– architecture design according to the specific requirements of users
– data management
– procedures for attaching a local PC cluster of any architecture
– design and implementation of the task/resource management system
– user account and virtual organization management
– security mechanisms in a PC cluster
– network resources management
– utilization of the IPv6 protocol family
– monitoring of cluster nodes and distributed applications
– design of a user/administrator interface
– design of tools for automated installation/reconfiguration of all nodes within the entire cluster
– dynamic load balancing and a checkpointing mechanism
– end-user applications

High Performance Computing and Visualisation with the SGI Grid for Virtual Laboratory Applications
Project No. 6 T C/05836

Project duration:
– R&D: December – November 2004
– Deployment: 1 year
Partners:
– HPC centers: ACK CYFRONET AGH (Kraków), PSNC (Poznań), TASK (Gdańsk), WCSS (Wrocław), University of Łódź
– End user: IMWM (Warsaw), Institute of Bioorganic Chemistry PAS
– Industry: SGI, ATM S.A.
Funds: KBN, SGI
(Map of partner sites: PSNC, TASK, IMWM, WCSS, PŁ, CYFRONET.)

Structure

Added value
Real remote access to the national cluster (... GRID):
– ASPs
– HPC/HPV
– laboratory instruments
– better usage of licences
– Dedicated Application Servers
– better usage of HPC resources
– HTC
– Emergency Computing Site (IMWM)
– a production Grid environment
– the middleware we will work out

Added value: Virtual Laboratory
– virtual, remote (?)
– VERY limited access today; the main reason: COSTS
– the main GOAL: to make such facilities accessible in a common way

The Goal
– Remote usage of expensive and unique facilities; better utilisation
– Joint ventures and on-line co-operation of scientific teams; shorter deadlines, faster work
– eScience comes closer
– Equal chances
– Tele-work, tele-science

Testbed infrastructure
– Pilot installation of NMR spectroscopy
– Optical network
– HPC and HPV systems
– Data mining
... more than remote access

Remaining R&D activities
Building a nation-wide HPC/HPV infrastructure:
– connecting the existing infrastructure with the new testbed
– Dedicated Application Servers
– resource management
– data access optimisation (tape subsystems)
– access to scientific libraries
– checkpoint restart (kernel level, IA64 architecture)
– advanced visualization (distributed, remote visualization)
– a programming environment supporting the end user: how to simplify the process of writing parallel applications

PROGRESS (1)
– Duration: December 2001 – May 2003 (R&D)
– Budget: ~4.0 MEuro
– Project partners: SUN Microsystems Poland; PSNC IBCh Poznań; Cyfronet AMM, Kraków; Technical University of Łódź
– Co-funded by the State Committee for Scientific Research (KBN) and SUN Microsystems Poland

PROGRESS (2)
Deployment: June 2003 – ...
– Grid constructors
– developers of computational applications
– operators of computing portals
Enabling access to the global grid through deployment of the PROGRESS open source packages.

PROGRESS (3)
– A cluster of 80 processors
– Networked storage of 1.3 TB
– Software: ORACLE, HPC Cluster Tools, Sun ONE, Sun Grid Engine, Globus
(Map labels: Wrocław, Gdańsk.)

PROGRESS GPE

EU Projects: Progress and GridLab

What is CrossGrid?
– 5th FP, funded by the EU
– Time frame: March 2002 – February 2005
Structure of the project:
– WP1 - CrossGrid Applications Development
– WP2 - Grid Application Programming Environment
– WP3 - New Grid Services and Tools
– WP4 - International Testbed Organisation
– WP5 - Project Management (including the Architecture Team and central Dissemination/Exploitation)

Partners
– 21 partners, including 2 industry partners, from 11 countries
– The biggest testbed in Europe

Project structure
– WP1 - CrossGrid Applications Development
– WP2 - Grid Application Programming Environment
– WP3 - New Grid Services and Tools
– WP4 - International Testbed Organisation
– WP5 - Project Management

Middleware
(Diagram, after Hoffmann, Reinefeld and Putzer: a Grid middleware layer joins visualization, supercomputers and PC-clusters, data storage, sensors and experiments over the Internet and other networks, with desktop and mobile access on top.)

Applications
– Surgery planning & visualisation
– Flooding control
– MIS
– HEP data analysis
– Weather & pollution modelling
HEP data-acquisition chain:
– level 1, special hardware: 40 MHz (40 TB/sec)
– level 2, embedded processors: 75 kHz (75 GB/sec)
– level 3, PCs: 5 kHz (5 GB/sec)
– data recording & offline analysis: 100 Hz (100 MB/sec)
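Those rates fix how selective each trigger level has to be; a quick check of the implied rejection factors, using only the numbers quoted above:

```python
# Rejection factors implied by the HEP trigger rates on the slide.
rates_hz = {"level 1": 40e6, "level 2": 75e3, "level 3": 5e3,
            "recording": 100}
stages = list(rates_hz.items())
for (src, r_in), (dst, r_out) in zip(stages, stages[1:]):
    print(f"{src} -> {dst}: keep 1 event in {r_in / r_out:,.0f}")
# Overall: 40 MHz -> 100 Hz, i.e. roughly 1 event in 400,000 is recorded.
```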

Migrating Desktop
– What's the best way to 'travel'? Roaming Access
– Grids, from Microsoft Windows or Linux
– take it anywhere, access it anyhow

GridLab, Enabling Applications on the Grid
Jarek Nabrzyski, Project Coordinator
Poznan Supercomputing and Networking Center

GridLab Project
– Funded by the EU (5+ M€), January 2002 – December 2004
– Application and testbed oriented: Cactus Code, Triana Workflow, and all the other applications that want to be Grid-enabled
– Main goal: to develop a Grid Application Toolkit (GAT) and a set of grid services and tools (resource management (GRMS), data management, monitoring, adaptive components, mobile user support, security services, portals, ...) and to test them on a real testbed with real applications

GridLab Members
– PSNC (Poznan), coordination
– AEI (Potsdam), ZIB (Berlin), Univ. of Lecce, Cardiff University, Vrije Univ. (Amsterdam), SZTAKI (Budapest), Masaryk Univ. (Brno), NTUA (Athens)
– Sun Microsystems, HP
– ANL (Chicago, I. Foster), ISI (LA, C. Kesselman), UoWisconsin (M. Livny)
Collaborating with:
– users! the EU Astrophysics Network, DFN TiKSL/GriKSL, the NSF ASC Project
– other Grid projects: Globus, Condor, GrADS, PROGRESS, GriPhyN/iVDGL, CrossGrid and all the other European Grid projects (GRIDSTART), GWEN, HPC-Europa

GridLab Aims
– Get computational scientists using the "Grid" and Grid services for real, everyday, production work (AEI relativists, the EU Network, gravitational wave data analysis, the Cactus user community, and all the other potential grid apps)
– Make it easier for applications to make flexible, efficient, robust use of the resources available to their virtual organizations
– Dream up, prototype, and test new application scenarios which make adaptive, dynamic, wild, and futuristic uses of resources

What GridLab isn't
– We are not developing low-level Grid infrastructure
– We do not want to repeat work which has already been done (we want to incorporate and assimilate it ...): Globus APIs, OGSA, ASC Portal (GridSphere/Orbiter), GPDK, GridPort, DataGrid, GriPhyN, ...

... need to make it easier to use
(Diagram: the Application asks "Is there a better resource I could be using?" by calling GAT_FindResource(), and the GAT forwards the query to the Grid.)

GridLab Architecture

The same application ...
(Diagram: the identical Application + GAT stack runs on a laptop with no network, on the Grid behind firewalls, and on a supercomputer.)
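The point of these diagrams is that the application codes against one abstract call, and the GAT decides at runtime what, if anything, backs it. A hypothetical Python-flavoured sketch of that pattern; the real GAT is a C library, GAT_FindResource comes from the slide, and every other name here is invented:

```python
# Hypothetical sketch of the GAT idea: one abstract call, pluggable
# backends picked at runtime. Not the real GAT API.

class LocalAdaptor:
    """Fallback for the laptop case: no network, the resource is here."""
    def find_resource(self, requirements):
        return "localhost"

class BrokerAdaptor:
    """Stand-in for a real middleware binding, e.g. a resource broker."""
    def __init__(self, broker_url):
        self.broker_url = broker_url
    def find_resource(self, requirements):
        # A real adaptor would query the broker; this one just pretends.
        return f"best match from {self.broker_url}"

def GAT_FindResource(requirements, adaptors):
    """Try each adaptor in turn; the application never learns which won."""
    for adaptor in adaptors:
        try:
            return adaptor.find_resource(requirements)
        except Exception:
            continue  # this backend is unavailable here; fall through
    raise RuntimeError("no adaptor could satisfy the request")

# The same application code runs on the Grid and on an offline laptop:
adaptors = [BrokerAdaptor("https://broker.example.org"), LocalAdaptor()]
print(GAT_FindResource({"cpus": 64}, adaptors))
```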

More info / summary
Bring your application and test it with the GAT and our services.