Institute for Digital Research and Education (IDRE): UCLA's CI Vision (Research CI, DataNet, Cyber Learning)

Presentation transcript:

UCLA's CI Vision: Research CI, DataNet, Cyber Learning
Institute for Digital Research and Education (IDRE)
Institute for Informatics (I2)
Common Collaboration and Learning Environment (CCLE)

The Institute for Digital Research and Education: IDRE HPC, IDRE STATS, IDRE HASIS

IDRE Mission Statement(s)
To accelerate discovery, the design and engineering of molecules, materials, and catalysts, and the understanding of complex processes, through simulations and through knowledge extraction from the large digital data generated in simulations, experiments, imaging, DNA sequencing, and historical records.
To support, advance, and guide a campus-wide program to position UCLA as a world leader in research and education in computational thinking, using high-performance computation, data visualization, and data analysis of large data sets and databases.

An overwhelming new wave of technology and complexity

What is IDRE?
An ACTIVE community of faculty, researchers, and students with:
- Common interests and overlapping research and education goals
- A growing federation of Centers of Excellence, e.g., LONI, FSC, IPAM
- A growing repository of knowledge, resources, and experiences
Components: Members, Mission, Governance and organization, Resources, Events/Activities, Research, Education
THE BENEFIT OF MEMBERSHIP IS THAT THERE IS NO "MEMBERSHIP," ONLY THE ADDED VALUE OF ACHIEVING YOUR RESEARCH AND EDUCATION GOALS.

What is IDRE? Governance and Organization
Governance:
- Director
- Executive Committee (sets policy and priorities)
- Ad Hoc Committees
- Oversight: VCR and VCIT
Organization:
- IDRE-HPC
- IDRE-HASIS
- IDRE-STAT

What is IDRE? Events/Activities
- Annual retreat sets themes for the year
- Speakers/Workshops
- Birds-of-a-feather lunches
- Training courses
- Initiatives
Projects:
- Focus Projects: annual call
- Funded Projects: fee for services
- Pilot Projects: director's level
- General Projects: reviewed by the resource committee
The areas of active research continually change as faculty members and IDRE research scholars respond to the landscape.

What is IDRE? Resources
- An MOU has been reached between the VCIT and the Director of IDRE: FTEs within ATS are now IDRE Research Staff who report up to the Director of IDRE. These include ~10 PhDs.
- Data center space, system administration resources, and funds to build out to 1,700-node capacity
- Hoffman2 shared cluster
- Technology sandbox
- Computational lab space (currently just walls)
- Visualization portal
- Additional resources to be acquired through grants

What is IDRE? Domain Science Research
- Advancing the frontier of high-energy physics
- Achieving fusion energy
- Advancing an understanding of the brain
- Designing new materials: for the nanoscale, energy efficiency, extreme conditions
- Designing new protein catalysts
- Predicting the earth's weather and climate
- Understanding space weather
- Determining the source of the most energetic particles in the universe
- Determining the most fundamental forces of nature
- Bioinformatics
- Proteomics
- Medical imaging
- Preventing heart attacks
- Population genetics and evolution
- Organizational research and social networks
Other domain research:
- Digital cultural mapping
- Historic modeling
- Geographic Information Systems

What is IDRE? Applied Math, Computer, and Information Science Research
- Domain-specific computing: multi-core, GPUs, FPGAs
- Parallel computing
- Code optimization and enhancement
- Algorithms: particle methods (PIC, meshless), molecular dynamics, quantum, fluid dynamics, electromagnetics
- Automating communication patterns
- Software engineering: developing and maintaining complex code
- Imaging
- Scientific workflows
- Grid and cloud computing
- Statistical modeling and computing

What is IDRE? Education
- IDRE training courses
- Developing a post-doctoral mentorship program
- Young researchers subgroup
- Using research codes in education
- Developing a digital humanities curriculum
- Cataloguing of computational science courses

The Shared Cluster Implementation at UCLA
A research virtual shared cluster with serial and parallel partitions, comprising:
- Campus General Purpose Cluster (160 cores)
- Base Cluster (96 cores)
- Application Cluster (80 cores)
- Contributed Clusters (space for ~12,000 cores)
The serial and parallel partitions are open to all campus researchers for commercial serial apps and serial jobs, and for commercial parallel apps and parallel jobs, respectively. Researchers have guaranteed access to the equivalent number of their contributed nodes for serial or parallel jobs, with access to additional pooled serial or parallel cycles (a scheduling sketch of this policy follows).
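To make the contributed-node policy concrete, here is a minimal sketch in Python of a condo-style allocation: each group is guaranteed up to the number of nodes it contributed, and idle capacity is pooled for other groups to harvest preemptibly. This is an illustration only, not the actual UCLA scheduler configuration; the group names and node counts are hypothetical.

```python
# Minimal sketch of a condo-style allocation policy: each group is
# guaranteed up to the number of nodes it contributed; anything idle
# joins a shared pool that other groups may "harvest" preemptibly.
# Group names and node counts below are hypothetical.

CONTRIBUTED = {"physics": 64, "chemistry": 32, "neuroimaging": 16}
BASE_POOL = 24  # campus-funded base nodes, open to everyone

def allocate(requests):
    """requests: dict of group -> nodes wanted. Returns per-group
    grants split into guaranteed and preemptible (harvested) nodes."""
    grants = {}
    # Pass 1: honor each group's guarantee up to its contribution.
    for group, wanted in requests.items():
        guarantee = CONTRIBUTED.get(group, 0)
        grants[group] = {"guaranteed": min(wanted, guarantee)}
    # Unused contributed capacity joins the shared pool.
    used = sum(g["guaranteed"] for g in grants.values())
    pool = BASE_POOL + sum(CONTRIBUTED.values()) - used
    # Pass 2: hand out pooled cycles (preemptible) to unmet demand.
    for group, wanted in requests.items():
        unmet = wanted - grants[group]["guaranteed"]
        take = min(unmet, pool)
        grants[group]["preemptible"] = take
        pool -= take
    return grants

if __name__ == "__main__":
    # chemistry asks for more than it contributed; physics is idle,
    # so chemistry harvests the idle nodes as preemptible capacity.
    print(allocate({"chemistry": 60, "neuroimaging": 10}))
```

In a production scheduler the second pass would be implemented as preemptible queues, so owners can reclaim their guaranteed nodes at any time.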

UCLA Shared Cluster: build-out through faculty-purchased nodes and storage
- Current: 450 nodes, 2,700 cores, 19 Tflops, 200 TB of storage
- 20 different research groups: Physics, Neuroimaging, Economics, Astrophysics, Biostatistics, Chemistry, Engineering, Physiological Science
- In the process of migrating 18 existing stand-alone clusters into the shared cluster
- Equivalent flop-to-storage match
- Roughly 9 Tflops of harvested cycles available to researchers (a quick check of these figures follows)
- Has already served as a pipeline for researchers to gain access to NSF and DOE facilities
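As a sanity check, the stated figures imply about 6 cores per node, roughly 7 Gflops per core, and a harvest fraction of nearly half of peak. A quick back-of-the-envelope calculation, using only the numbers on this slide:

```python
# Back-of-the-envelope check of the slide's figures; only the numbers
# stated above are used, with no assumptions about the hardware.
nodes, cores = 450, 2700
peak_tflops, harvested_tflops = 19.0, 9.0

cores_per_node = cores / nodes                      # 6.0 cores/node
gflops_per_core = peak_tflops * 1e3 / cores         # ~7.0 Gflops/core
harvest_fraction = harvested_tflops / peak_tflops   # ~47% of peak

print(f"{cores_per_node:.1f} cores/node, "
      f"{gflops_per_core:.1f} Gflops/core, "
      f"harvested ~ {harvest_fraction:.0%} of peak")
```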

UC Shared Cluster Pilot
- 10 campuses, 5 medical centers, SDSC, LBL
- High potential for regional and system-wide capability and capacity
Learning how to work as a UC system is non-trivial:
- Build the experience base with system-shared resources
- Build the experience base with shared regional data centers
- Build the business model
- Build the trust of the faculty researchers

Why a Shared Cluster Model: The Operational Perspective
- More efficient use of scarce people resources: standalone clusters have separate "everything" (storage, head/interactive nodes, network, user space, configuration)
- Higher overall performance than a standalone cluster: harvesting unused compute cycles (in some cases only 30% of cycles are used on standalone systems) and allocating unused cycles to important campus initiatives
- More efficient data center operations
- Better security
- Better staff utilization: 1.4 FTE for 7 x 32-node clusters vs. 0.5 FTE for 1 x 200-node cluster
- Better machine utilization: recover the unused cycles of stand-alone clusters
- Better machine performance: gain the 30%+ of cycles lost to I/O wait on GigE vs. InfiniBand
- Better code performance: gain a factor of 2-20 in utilization with optimized code
- Better data center efficiency: consolidated data centers are 3-4x more efficient than ad hoc space, and regional data centers are more efficient than distributed ones
A rough model combining these figures appears below.
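To see how these separate gains compound, here is a rough model using the figures quoted above (30% baseline utilization on standalone systems, ~30% of cycles lost to I/O wait, and the 1.4 vs. 0.5 FTE staffing comparison). Treating the factors as independent and composing them this way is a simplifying assumption for illustration, not a claim from the slide.

```python
# Rough composition of the gains quoted on this slide. The inputs
# (utilization, I/O-wait loss, staffing ratios) come from the slide;
# combining them as independent factors is an assumption made here
# purely for illustration.

# Staffing: FTE per node, standalone vs. shared.
standalone_fte_per_node = 1.4 / (7 * 32)   # 7 clusters of 32 nodes each
shared_fte_per_node = 0.5 / 200            # one 200-node shared cluster
staff_ratio = standalone_fte_per_node / shared_fte_per_node   # ~2.5x

# Delivered cycles: standalone systems may use only ~30% of cycles;
# a shared cluster harvests the idle remainder and avoids the ~30%
# lost to I/O wait on GigE by using InfiniBand.
standalone_delivered = 0.30
shared_delivered = 0.70   # illustrative: idle cycles recovered
cycle_ratio = shared_delivered / standalone_delivered          # ~2.3x

print(f"Staff per node: {staff_ratio:.1f}x lower on the shared cluster")
print(f"Delivered cycles per core: {cycle_ratio:.1f}x higher, before the")
print("2-20x available from code optimization")
```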

Many System-wide IT Infrastructure Projects Are Interrelated: UCLA Is in a Leadership Position
- Shared Regional Data Centers
- Mainframe Sharing
- Consolidated Campus Data Centers
- Reciprocal Disaster Recovery Sites
- Systemwide Shared Research Computing Services Program
- Shared Research Computing Services Pilot
New equipment can be located in shared regional data centers to relieve campus data center space constraints and achieve efficiencies. Once shared space becomes available in regional data centers, implementation of other initiatives that call for sharing infrastructure or services is greatly facilitated.