UVic Advanced Networking Day, 28 November 2005
University of Victoria Research Computing Facility
Colin Leavett-Brown

Introduction
Established in 1999
Supports national laboratories on Vancouver Island
– NRC Herzberg Institute of Astrophysics
– NRCan Pacific Forestry Centre
– NRCan Pacific Geoscience Centre
Broad range of research
– Earth sciences, physical sciences, engineering
– Social sciences, medical sciences
Users across Canada, the US, and Europe
Centre of Grid activity (Forestry and Grid Canada)

Minerva HPC (1999 CFI Award)
8-node, 8-way IBM SP
Upgraded to 8 nodes, 16-way in [year]; ranked [n]th in the TOP500 (highest ranking of any C3 facility?)
Funded in part by an IBM Shared University Research (SUR) Award of $830,000
Primary users: cosmology, climate simulation, engineering, and geosciences
Still operational and utilized today

Storage and Cluster (2001 CFI Award)
Four-year project; online in March 2003
Currently 80 TB of disk and 440 TB of tape capacity
Primary users: astronomy, forestry, particle physics
– Grid-enabled for the Forestry Data Grid
IBM beta test site for LTO2 (Jan-Apr 2003)
Upgraded to LTO3 (March 2005)
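As a rough sense of scale (not stated on the slide), the sketch below estimates how many cartridges the quoted 440 TB of tape capacity implies, assuming native (uncompressed) LTO capacities of roughly 200 GB per LTO-2 cartridge and 400 GB per LTO-3 cartridge.

```python
import math

# Rough sizing sketch: cartridges implied by the quoted 440 TB tape capacity.
# Assumed native capacities: LTO-2 ~200 GB, LTO-3 ~400 GB per cartridge.
TAPE_CAPACITY_TB = 440

for generation, native_gb in (("LTO-2", 200), ("LTO-3", 400)):
    cartridges = math.ceil(TAPE_CAPACITY_TB * 1000 / native_gb)
    print(f"{generation}: ~{cartridges} cartridges for {TAPE_CAPACITY_TB} TB")
# LTO-2: ~2200 cartridges; LTO-3: ~1100 cartridges
```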

Mercury & Mars Xeon Clusters
Operational in March [year]
[n] dual-Xeon blades (2.4 & 2.8 GHz)
TOP500 in 2003
Now have 154 dual-Xeon blades (including 3.2 GHz)
Primary users: particle physics, engineering, geoscience
Back-filled with particle physics simulations
– 1 TB of data shipped to Stanford (SLAC) each week
Grid-enabled and connected to Grid Canada
– Particle physics and other simulation applications
– Goal is to back-fill other sites for simulation production
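To put the weekly SLAC shipment in networking terms, here is a minimal back-of-the-envelope sketch, assuming 1 TB means 10^12 bytes and the transfer is spread evenly over the week:

```python
# Sustained network rate implied by shipping 1 TB of simulation output to
# SLAC every week, as quoted on the slide above.
BYTES_PER_TB = 1e12          # assumption: decimal terabyte
SECONDS_PER_WEEK = 7 * 24 * 3600

data_tb_per_week = 1.0
sustained_mbps = data_tb_per_week * BYTES_PER_TB * 8 / SECONDS_PER_WEEK / 1e6
print(f"~{sustained_mbps:.1f} Mbit/s sustained")   # ~13.2 Mbit/s
```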

TAPoR Project (2001 CFI Award)
The TAPoR (Text-Analysis Portal) project will help Canadian researchers conduct online textual research
Six leading humanities computing centres in Canada, including UVic, will contribute collections of rare tapes and documents
– ISE, Robert Graves Diary, Cowichan Dictionary
– Database servers are integrated into the Storage Facility

Kikou Vector Computational Facility

Kikou Vector Computational Facility
$2.42M CFI Award (2004) for a vector computational facility
Purchased an NEC SX-6
– $5.3M
– Commissioned March 2005
– 4 nodes, 32 processors, [n] GFlops, 32/64 GB, 10 TB
– Primary use: climate modelling (A. Weaver)

Llaima Opteron Cluster
2004 CFI/BCKDF awards
40 nodes, dual Opteron (2.0 GHz), 4 GB
20 of 40 nodes commissioned September 2005
Will have direct SAN attachment
Primary use is astronomy (S. Ellison)

Networking
Campus networks are primarily optical, with Cisco 6500-series routers
Connections to the access layer are generally twisted-pair copper
Both a general-use network and a dedicated research network are provided (GE), interconnected via the access layer
The research network is used for I/O, node interconnect, and direct connections to BCNet/CA*net4

Networking
External connections: commodity network (100E), ORAN/CA*net4 (optical & GE) through BCNet
CA*net4 connects to other national networks
– Abilene, NLR, ESnet (US), GEANT (EUR), JANET (UK), JAXA (JPN), KREONET (KOR), AARNet (AUS)
Lightpaths in the near future (Jan 06)
– ~1 Gbps going to ~10 Gbps
– Optical switching: BCNet, member sites, TXs
– Applications: HEPnet/Canada testbed, CCCma, BaBar/SLAC, ATLAS/TRIUMF, WG2
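As a small illustration of what the lightpath upgrade means in practice, the sketch below estimates the time to move a 1 TB dataset (the weekly BaBar/SLAC volume mentioned earlier) at the nominal 1 Gbps versus 10 Gbps rates, ignoring protocol overhead:

```python
# Transfer time for a 1 TB dataset at nominal lightpath rates.
# Assumptions: 1 TB = 1e12 bytes, link runs at its nominal rate, no overhead.
BYTES_PER_TB = 1e12

def transfer_hours(size_tb: float, rate_gbps: float) -> float:
    """Hours to move size_tb terabytes at rate_gbps gigabits per second."""
    return size_tb * BYTES_PER_TB * 8 / (rate_gbps * 1e9) / 3600

for rate in (1, 10):
    print(f"{rate:>2} Gbps: {transfer_hours(1.0, rate):.1f} h per TB")
# ~2.2 h per TB at 1 Gbps, ~0.2 h per TB at 10 Gbps
```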

Infrastructure Challenges
March 2002: 43 kW research load
– March 2000, Minerva, 6 kW
– March 2003, Storage, 30 kW
– March 2003, Mercury & Mars, 40 kW
– April 2004, TAPoR, 5 kW
– March 2005, Kikou, 45 kW
– September 2005, Llaima, 20 kW
November 2005: 150 kW
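A quick arithmetic check of the slide's figures: summing the per-system loads listed above lands close to the ~150 kW quoted for November 2005.

```python
# Sum of the per-system research loads listed on the slide above.
loads_kw = {
    "Minerva (Mar 2000)": 6,
    "Storage (Mar 2003)": 30,
    "Mercury & Mars (Mar 2003)": 40,
    "TAPoR (Apr 2004)": 5,
    "Kikou (Mar 2005)": 45,
    "Llaima (Sep 2005)": 20,
}
print(f"Total research load: {sum(loads_kw.values())} kW")  # 146 kW, ~150 kW as quoted
```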

Infrastructure Challenges
Ran out of space, power, and A/C
$700K renovation: increased the room by 32 m², added 44 tons of A/C, added a 225 kW UPS
Still growing; time for another renovation

Support
Managed by the University of Victoria Computer Centre
– Under the Research and Administrative Systems Support Group
– 3 FTEs dedicated to research computing
– Augmented by approximately 9 other sysadmins and network and hardware technicians
– 24/7, almost
C3 TASP support
– The position was used, very successfully, to assist with parallel programming support
– The position is currently vacant

User community
Minerva (~150 user accounts)
– NRC, Queen's, UBC, SFU, NRCan, Alberta, Washington, Dalhousie
Storage Facility
– HIA and PFC use the facility to distribute images to their user communities
– TAPoR will provide a database service for the social sciences
Cluster (~60 user accounts, not including Grid Canada)
– NRC (Ottawa), NRCan (Geoscience and Forestry)
– Montreal, UBC, TRIUMF, Alberta, NRC
– Germany, UK, US (SLAC)

Plans for 2006
Storage
– Currently at the beginning of the 4th year
– 200 TB disk and 1000 TB tape
– Higher bandwidth; remove single points of failure
– Minimise ongoing maintenance costs
Mercury Cluster
– 50 blades per year
Llaima
– Commission the remaining 20 nodes
– Upgrade to multi-core processors?
Kikou
– Another 8-way node?

Future Plans
Next CFI competition
– Renewal of the Minerva HPC
– Continued support of Storage
– Modest expansion of the Mercury Cluster
– Upgrade of TAPoR
– Part of WG2
Venus and Neptune projects
– $50 million CFI projects to instrument the floor of the Pacific shelf
– Venus (inland strait) will take data in 2005
– Neptune begins data taking in [year]

Summary
UVic RCF supports a broad range of users
– A large fraction of the users are not part of the C3 community
Wide range of computational and storage requirements
– HPC, cluster, storage, database, Grid
Very successful
– Operational very quickly and efficiently
– 2 machines in the TOP500
– Large storage repository
– Willingness to experiment: beta test site, 2 Grid modes