3 Questions for Cluster and Grid Use


3 Questions for Cluster and Grid Use
Asheville, North Carolina, September 11, 2006
Geoffrey Fox
Computer Science, Informatics, Physics
Pervasive Technology Laboratories
Indiana University, Bloomington IN 47401
http://grids.ucs.indiana.edu/ptliupages/presentations/
gcf@indiana.edu
http://www.infomall.org

Is the work of the Global Grid Forum (GGF, now the Open Grid Forum, OGF) of any value to the CCGSC community?

- OGF is currently meeting (September 11-14). It develops standards and runs a varied collection of "forums" including workshops; it now has a specific eScience "function" (alongside its Standards and Enterprise functions).
- For example, OGF19 (North Carolina, Jan 29 - Feb 2) includes one-day workshops on "Web 2.0 and the Grid" and "Federated Identity in Grids and (Virtual) Organizations".
- Currently very few OGF standards are actually used; many W3C and OASIS standards are also not used.
  - Mashups are composed from JavaScript, AJAX, and REST, not from BPEL, WSDL, and SOAP (http://www.programmableweb.com/matrix lists 270 APIs and 1000 mashups).
  - GridFTP is popular (BitTorrent more so); BES (Basic Execution Service) with JSDL (Job Submission Description Language) is likely to become popular.
  - "Standardly" available software such as GT4, Condor, and SRB is used.
- Wide use of standards should help users, since it allows more sustainable software with multiple interoperating "vendors".
- Could OGF activities be tweaked or redirected to be of more value?
- Measurements: the number of OGF standards used in real Grids; the number of (eScience) attendees at OGF meetings.

Can we survive / make use of innovative multicore programming models? (I)

- Multicore chips will generate programming models optimized for "broad applications" (aka "Microsoft Word"), eventually scaling up to some 128 cores.
  - Fast thread-switching operations with microsecond latency.
- It is unlikely that OpenMP and MPI will become the dominant "broad programming" models, as they are optimized for different criteria (a minimal OpenMP sketch follows below).
- ParalleX (LSU), Software Transactional Memory (STM), Microsoft's Concurrency and Coordination Runtime (CCR), and functional languages are approaches with quite different concurrency models from MPI/OpenMP.
- It is not yet clear how CCA and the HPCS languages sit with respect to multicore and its application base.
- One will get far-out "broad programming models" as well as those aimed at evolution over the next 5-10 years.
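For concreteness, here is a minimal C sketch of the OpenMP style the slide contrasts with the newer models; the reduction kernel and build line are illustrative choices, not from the talk.

    /* Minimal OpenMP reduction: the shared-memory HPC style that the
     * slide argues was tuned for kernels, not "broad" applications.
     * Build (assumption): cc -fopenmp -O2 omp_sum.c -o omp_sum */
    #include <omp.h>
    #include <stdio.h>

    int main(void) {
        const int n = 1 << 20;
        double sum = 0.0;

        /* Fork threads; each accumulates a private partial sum that
         * the runtime combines at the end of the loop. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < n; i++)
            sum += 1.0 / (1.0 + (double)i);

        printf("sum = %f using up to %d threads\n",
               sum, omp_get_max_threads());
        return 0;
    }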

Can we survive / make use of innovative multicore programming models? (II)

- Are these new models a distraction or an opportunity for scientific computing?
- Can we / should we produce a new generation of hybrid programming models that, say, span MPI to CCR? (A structural sketch follows below.)
- Such models could open up applications that did not work well on traditional clusters and programming models:
  - Discrete event simulation with Time Warp and similar protocols is one example.
  - Tree algorithms such as branch-and-bound and computer chess will run well.
- Measurements: the number of new applications enabled by multicore; the number of applications using "new" programming models; the performance of "new" programming models (and their runtimes).
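One plausible reading of "spanning MPI to CCR" is a two-level model: message passing between nodes, fine-grained threading within a node. The C sketch below uses OpenMP as a stand-in for the intra-node level (CCR itself is a .NET runtime); the kernel and build line are assumptions, not from the talk.

    /* Hybrid sketch: MPI across nodes, OpenMP threads within a node.
     * Build (assumption): mpicc -fopenmp hybrid.c -o hybrid
     * Run   (assumption): mpirun -np 4 ./hybrid */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Intra-node level: threads share memory on one multicore chip. */
        double local = 0.0;
        #pragma omp parallel for reduction(+:local)
        for (int i = rank; i < 1000000; i += size)
            local += 1.0 / (1.0 + (double)i);

        /* Inter-node level: message passing combines per-node results. */
        double global = 0.0;
        MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0)
            printf("global = %f across %d ranks\n", global, size);

        MPI_Finalize();
        return 0;
    }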

Do we need more expertise, education, and consultants in parallel/concurrent computing?

- There is a plethora of students with Grid and Internet technology skills.
- Few Computer Science students (at Indiana) take more than an optional introductory parallel computing class.
- Application scientists are using parallel computers (clusters), but in my experience they often use rather inefficient, naive algorithms:
  - e.g. O(N^2) rather than O(N log N) algorithms;
  - e.g. misuse of MPI collective communications (illustrated below).
- DoE probably has plenty of experts.
- Do we need more students, and more classes taught by more faculty?
- Do we need more "consultants"?
- Measurements: the quality of code produced by "new" users; the number of students graduating per year in various parts (CS and applications) of parallel computing.
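As a hedged illustration of the collective-communication point, the C sketch below sums one value per rank two ways: a hand-rolled loop that serializes O(N) receives at the root, and a single MPI_Allreduce call that typically completes in O(log N) steps. The example is illustrative, not from the talk.

    /* Naive versus idiomatic global sum across MPI ranks.
     * Build (assumption): mpicc reduce.c -o reduce */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        double local = rank + 1.0, total = 0.0;

        /* Naive pattern: rank 0 receives from every other rank in turn,
         * so the reduction takes O(N) sequential steps. */
        if (rank == 0) {
            total = local;
            for (int src = 1; src < size; src++) {
                double v;
                MPI_Recv(&v, 1, MPI_DOUBLE, src, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                total += v;
            }
        } else {
            MPI_Send(&local, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
        }

        /* Idiomatic pattern: one collective call; implementations use
         * tree or butterfly algorithms, roughly O(log N) steps, and
         * every rank receives the result. */
        MPI_Allreduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
        if (rank == 0)
            printf("total = %f\n", total);

        MPI_Finalize();
        return 0;
    }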