CrossGrid Testbed Node at ACC CYFRONET AGH
Andrzej Ozieblo, Krzysztof Gawel, Marek Pogoda
CGW'01, 5 Nov 2001

Summary
- What is ACC CYFRONET AGH?
- Present and future of ACC
- Why ACC Cyfronet for CrossGrid Testbed?
- First installations
- Next steps


What is ACC CYFRONET AGH?
- Established in 1973 as an independent, non-profit organization
- Since 1999 a separate unit of the University of Mining and Metallurgy (AGH) in Krakow
- About 60 employees in several divisions
- Main goals are to provide universities and research institutes in Krakow with:
  – computer power combined with a wide range of software
  – communication infrastructure and network services

[Figure: LAN in ACC CYFRONET and the MAN of Krakow]

Present and future of ACC Cyfronet AGH
Computers (parallel):
- SGI 2800 – R10000 and R12000 processors, 73.6 Gflops of peak performance, 40 GB of operating memory, 190 GB of disk storage
- HP S-Class – PA 8000 processors, 11.5 Gflops of peak performance, 4 GB of operating memory, 45 GB of disk storage
- SPP1600/XA – 32 HP PA-RISC 7200 processors, 7.68 Gflops of peak performance, 2.5 GB of operating memory, 40 GB of disk storage
- ???

[Photo: SGI 2800]

Data migration – hierarchical data storage system
[Diagram: an HP K400 server running UCFM 2.3 (UniTree) software manages data migration between a disk cache, an HP magneto-optical disk library, and ATL tape libraries, all SCSI-attached; local access via FTP and/or NFS over the Cyfronet LAN, remote access via FTP]
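To make the migration idea concrete, here is a minimal sketch of LRU-style staging from the disk cache to slower media. This is an illustrative assumption, not UniTree's actual policy; the field names and data layout are invented for the example.

    # Illustrative LRU staging sketch; NOT UniTree's real algorithm.
    def stage_out(files, cache_limit_gb=110):
        """Move least-recently-used files from the disk cache to the M-O
        library once the cache exceeds its capacity (110 GB on our system)."""
        used = sum(f["size_gb"] for f in files if f["tier"] == "disk_cache")
        # The longest-untouched files are the first candidates for migration.
        for f in sorted(files, key=lambda f: f["last_access"]):
            if used <= cache_limit_gb:
                break
            if f["tier"] == "disk_cache":
                f["tier"] = "mo_library"   # a second copy also goes to tape
                used -= f["size_gb"]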

Hierarchical data storage system – features
- Automatic data migration to magneto-optical disks and tapes
- Each file stored in two copies
- Trashcan – protection against accidental removal
- Capacities:
  – disk cache: 110 GB
  – M-O library: 660 GB
  – tape libraries: 17 TB
- Average file access time:
  – M-O library: 12 s + 30 s/100 MB
  – tape libraries: 3 min + 30 s/100 MB
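As a quick worked example of the access-time model above (initial latency plus 30 s per 100 MB transferred; the 500 MB file size is just a sample value):

    def access_time_s(size_mb, latency_s, s_per_100mb=30):
        """Time to first byte plus linear transfer at 30 s per 100 MB."""
        return latency_s + s_per_100mb * size_mb / 100.0

    # A 500 MB file takes ~162 s from the M-O library and ~330 s from tape.
    print(access_time_s(500, 12))       # M-O library: 12 s latency
    print(access_time_s(500, 3 * 60))   # tape library: 3 min latency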

Present and future of ACC
Network:
- ACC Cyfronet is the leading unit of Krakow's MAN:
  – main network node in southern Poland
  – designer and owner of the fiber-optic infrastructure within the city (links several dozen institutions; approximately 70 km long; ATM and/or FDDI standards)
- Provides access to interurban and international connections over three WANs (wide area networks): Ebone (3 Mb/s), POLPAK-T (5 Mb/s) and POL-34 (155 Mb/s)
- POL-34 (ATM standard): 155 Mb/s, to be upgraded to 622 Mb/s soon


Present and future of ACC
Network:
- The future is PIONIER (Polish Optical Internet) – a high-speed academic network in Poland with its own fiber-optic infrastructure; should be ready in two years
- The PSNC supercomputing center in Poznan already has access to GEANT – the Gigabit (2.4 Gb/s) European Academic Network

Why ACC Cyfronet for CrossGrid Testbed?
- Good infrastructure: computer and network resources
- High-speed connection with the other testbed centers in Europe

First installations
- PC Linux cluster (8 processors + switch at the beginning) with a simple disk storage system
- Switch: at least 8 FastEthernet ports + 1 GigabitEthernet port
  – e.g. BATM, 48 FastEthernet + 2 GigabitEthernet, price: ~$7500
  – e.g. BATM, 24 FastEthernet + 1 GigabitEthernet, price: ~$4000
- Software installation (Globus, middleware, etc.) – see the sketch below
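One of the first configuration steps after a Globus install is the grid-mapfile, which maps grid certificate subjects to local Unix accounts. Below is a minimal sketch of reading one; the path is the conventional Globus location, and the file's DN/account entries are site-specific (no real Cyfronet mappings are shown).

    # Read a Globus-style grid-mapfile: each line is "<quoted cert DN> <local account>".
    import shlex

    def load_gridmap(path="/etc/grid-security/grid-mapfile"):
        mapping = {}
        with open(path) as f:
            for line in f:
                line = line.strip()
                if line and not line.startswith("#"):
                    dn, account = shlex.split(line)[:2]  # shlex honours the quoted DN
                    mapping[dn] = account
        return mapping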

Further plans
- Real-time calculations for HEP experiments (the future LHC accelerator at CERN):
  – a data transfer speed of 1.5 Gb/s is needed (see the quick check below)
  – GigabitEthernet port for our Cisco Catalyst 8540 (connection with the POL-34 network); price: ~$17000; not necessary at the first stage
- Heterogeneous CrossGrid node:
  – including a small number (e.g. 8) of SGI 2800 processors by using the miser or mpset programs offered by IRIX 6.5
  – incorporating our hierarchical storage system (UniTree + ATL)
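For scale, a quick check of that 1.5 Gb/s requirement against the link speeds quoted in these slides:

    # Link speeds from the slides (Mb/s) vs. the 1.5 Gb/s HEP requirement.
    required_mbps = 1500
    links = {"POL-34 today": 155, "POL-34 after upgrade": 622, "GigabitEthernet port": 1000}
    for name, mbps in links.items():
        verdict = "meets" if mbps >= required_mbps else "falls short of"
        print(f"{name}: {mbps} Mb/s {verdict} the 1.5 Gb/s requirement")

Even a single GigabitEthernet port falls short of the full 1.5 Gb/s, which is consistent with the slide's note that the Catalyst 8540 upgrade is not necessary at the first stage.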