Hideko Mills, Manager of IT Research Infrastructure

Goal: To strengthen the campus research computing infrastructure to meet current and future needs.

Current Project: The Euclid Cluster – Prof. Manos Mavrikakis’ research group
Computational chemistry approaches to improve engineering practices in:
– Chemical processing
– Alternative energy
– Pollution prevention
Key Parameters:
– 2,184 compute cores
– 273 compute servers
– 13 Terabytes of storage
– 20 Teraflops peak compute capacity
– 10 GigE interconnect
– Intel Nehalem CPUs
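As a rough cross-check of these figures, peak capacity is cores × flops-per-cycle × clock. The sketch below assumes 4 double-precision flops per cycle per Nehalem core and a clock of roughly 2.3 GHz; neither figure appears on the slide.

```python
# Rough sanity check of the Euclid peak-compute figure.
# Assumptions (not on the slide): 4 double-precision flops per cycle
# per Nehalem core and a ~2.3 GHz core clock.
servers = 273
cores = 2184
flops_per_cycle = 4      # assumed Nehalem per-core throughput
clock_hz = 2.3e9         # assumed core clock

cores_per_server = cores / servers                      # 8.0 -> dual quad-core nodes
peak_tflops = cores * flops_per_cycle * clock_hz / 1e12

print(f"{cores_per_server:.0f} cores per server, ~{peak_tflops:.1f} TFlops peak")
# -> 8 cores per server, ~20.1 TFlops peak, consistent with the slide
```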

The Euclid Cluster
What can be done now that wasn’t available before?
– Efficient scaling of jobs due to a low-latency network
– Total compute power of 20 Teraflops (more than anything before on campus)
– Move large datasets and files at up to 10 Gbps of bandwidth (10 times more bandwidth than any existing interconnect)
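To put the 10 Gbps figure in perspective, here is a back-of-the-envelope estimate of the time to move the cluster’s full 13 TB of storage, assuming an ideal, sustained line rate with no protocol or storage overhead (an assumption, not a claim from the slide).

```python
# Back-of-the-envelope transfer time at the 10 GigE line rate.
# Assumes an ideal, sustained 10 Gbps with no protocol or storage overhead.
payload_bytes = 13e12    # the cluster's 13 TB of storage, used as an example payload
link_bps = 10e9          # 10 Gigabit Ethernet

hours = payload_bytes * 8 / link_bps / 3600
print(f"~{hours:.1f} hours to move 13 TB at line rate")   # ~2.9 hours
```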

The Euclid Cluster
What specifically will it enable?
– Tackle projects that are compute-intensive and outside the capabilities of existing UW infrastructure
– Showcase state-of-the-art open-source software and technologies to benefit the UW research community

The Euclid Cluster
How will it help colleagues?
– Available to the broader UW community for specialized computing applications
– The Center for High Throughput Computing (CHTC), led by Prof. Miron Livny (Computer Sciences), will provide access to the cluster under the aegis of the Condor project
Key Scientific Codes to run on Euclid:
– VASP (cms.mpi.univie.ac.at/vasp/)
– DACAPO (dcwww.camd.dtu.dk/campos/Dacapo/)
– GPAW
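For illustration, here is a minimal sketch of how a researcher might hand a job to the Condor-managed cluster from Python. The submit-description values (executable name, arguments, resource request, file names) are placeholders, not settings taken from the presentation.

```python
# Minimal sketch of submitting a job to the Condor-managed cluster.
# The executable, arguments, resource request, and file names below are
# placeholders, not values taken from the presentation.
import subprocess

submit_description = """\
universe     = vanilla
executable   = run_dft.sh
arguments    = input_file
log          = job.log
output       = job.out
error        = job.err
request_cpus = 8
queue
"""

with open("euclid_job.sub", "w") as f:
    f.write(submit_description)

# Hands the description to the Condor scheduler; requires a working
# Condor installation and submit access on the cluster.
subprocess.run(["condor_submit", "euclid_job.sub"], check=True)
```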

Campus Collaboration
– Funding: Department of Chemical Engineering, College of Engineering, Center for High Throughput Computing (CHTC), WARF, and the Division of Information Technology
– Almost entirely open-source software: no purchases
– Multi-vendor investment: Dell, Cisco, Chelsio, APC

Questions?