Do’s and Don’ts of Building Grand Challenge Application Teams
Ed Seidel
Max-Planck-Institut für Gravitationsphysik (Albert Einstein Institute)
NCSA, U. of Illinois
Co-Chair, GGF Applications Working Group
Things I wish I could do with (or to) my Grand Challenge Projects…

My experiences: what can we learn?
Six large-scale projects:
NSF BH Grand Challenge
NASA NS Grand Challenge
NSF KDI Astrophysical Simulation Collaboratory Project
EU Astrophysics Network (5th Framework Programme)
EU GridLab Project
German DFN-Verein TiKSL/GriKSL Projects
They are largely about (multidisciplinary) community building, somewhat overlooked, even by the PIs
Examples of the future of science & engineering:
Require large-scale simulations, beyond the reach of any machine
Require large, geo-distributed, cross-disciplinary collaborations
Require Grid technologies, but are not yet using them!

Black Hole Grand Challenge Alliance ($4.5M, original NSF GC program)
Background/Goals:
8 US institutions
Solve the problem of colliding black holes (try…)
Bring together computer and physical scientists to solve the problem on HPC hardware
Develop a community
Problems:
Difficult community: money brought them together
No pre-existing infrastructure for computational collaborations
CS and physicists had trouble working together
Not enough cycles: where was the TFlop?
Successes:
Community came closer together (though somewhat scarred)
Learned what we needed: computational tools like Cactus and GrACE came out of the lessons learned
Bandwidth needs: very low; remote login, web pages (very new!)

Neutron Star Grand Challenge ($1.4M, NASA Round 2)
Background/Goals:
5 institutions
(Develop infrastructure to) solve the problem of colliding neutron stars
Issues:
Personality clashes
Infrastructure (Cactus, GrACE) still under development
Computer scientist in charge of a science project
Project not seen as very successful in the astro community: “Where’s the physics?”
But: project excessively performance-milestone based
–Q: “How can you cut our postdoc funding?? Must do physics!”
–A: “If you achieve 100 GF, we’re pretty sure you’ll find a way to do some physics…”
Successful, but mixed; perhaps even did some damage…
Bandwidth needs: minimal, but could have been much more (remote viz, etc., too hard for people, but should use it!)

Astrophysics Simulation Collaboratory ($2.2M, NSF KDI Program)
Background/Goals:
4 US institutions + German projects
Basically a technology program with an application driver
–Portal, AMR, NS collapse problem
Issues:
Technologist or scientist in charge?
Deployment of technologies difficult
Community acceptance
–Scientists need this, but don’t get it
–Criticized for using the word “Collaboratory” in an NRAC proposal!
Bandwidth needs: should be much higher than they are! Catch-22 again…

German TiKSL/GriKSL Projects (DM 2.5M, DFN-Verein)
Background/Goals:
Develop remote visualization/steering, collaborative simulation, and distributed computing capabilities
Successes:
Wonderful technology; it all works!
All research/development steered by application needs
Incredible matching effort:
–Embedded in a physics research group: a dozen physics postdocs/students in Potsdam, forced to use the stuff!
–Leverage: tightly coupled to the ASC Project
–Visitor program supplements the effort considerably
Problems:
Far too little travel money! I have to supplement to make it work!
Technologies never quite mature enough for easy adoption by the community
Even in my group, people are very reluctant to waste time
Bandwidth needs:
Aimed to drive high-speed networking; Gbit networks easily pushed (Shalf Gigabit Challenge Award…)

EU Astrophysics Network (€1.5M, EU 5th Framework Programme)
Background/Goals:
10 EU institutions, 3 years
Solve the same problems, building on previous work
Build/train a community
Problems:
No EU computing centers, policies, etc.
Level of computational expertise in apps groups very low compared to the US (OK: train them right from the beginning!)
Cultural differences much bigger
Successes/Advantages:
Draw on/integrate individual strengths: no forced march
People see the scientific advantage of working together
Existing collaborative infrastructure! Leveraging all the above: Cactus, ASC, etc.
Bandwidth needs: growing, but people make do…
Want conferencing for collaborations and training
Could use Grid technologies for science; Catch-22: bandwidth not there, so people don’t push…

GridLab Project and others like it (€5M, EU IST Programme)
Background/Goals:
Co-develop innovative Grid infrastructure and applications/experiments:
–Cactus, Triana, gravitational-wave astronomy, others
–Bring others in later
Use other apps projects for testing
Grass-roots effort: the Egrid testbed came first
Successes (not started yet):
Created an excited community
Brussels agreed to send money to the US!!!
Problems:
Excessive regulation and control by Brussels
Hard to find experienced people
Lack of applications
No money for conferencing facilities or for coordinating with other projects…
Bandwidth needs: can be very large!!

Summary of Issues
The obvious: the basic “application-driving-technology” model is correct
Need to encourage application teams for high-bandwidth Grid apps to drive the program
Chair of the GGF Apps group: “A small, small fraction of groups are using the Grid!!!”
Need programs like this to force centers to provide capabilities
How to achieve real collaboration/communication in such projects?
Basic principle: people do NOT naturally communicate, and projects are always confused
Push, encourage, and fund collaborative technologies (makes a huge difference)
–VTCs, full-scale Access Grid (AG) nodes, smaller-scale ways to connect
–Better if embedded in real groups, not just developed in a void!
How to do this?
–Don’t forget the obvious: time differences can be a significant hindrance, sometimes an advantage
Adequate, and GENEROUS, travel allowances: many projects are strangled without them
Explicitly ask proposers to explain how they will work coherently, and how they will use the technology and get it used

How to achieve real leverage within projects?
Real progress requires real effort and real people
People are typically too busy to do their jobs
Don’t hire 10 people at 10% each!
Visitor money is very important (the single most important thing, in my experience)
Exchanges between project members at different sites
Significant matching/embedding can be a good sign (e.g., my TiKSL project…)
Need to encourage strong PIs
Apps teams headed by apps people
CS teams headed by CS people
Need good standing in the community and good social skills

Achieving Leverage Between Projects
Encourage, and provide specific mechanisms for, clustering/linking projects
App/infrastructure balance within a single project is good, BUT
Explicit linkage between mostly-tech and mostly-apps projects is good, too
–People may work better and focus better this way
–Technology projects without apps groups sometimes have no rudder and go awry
–Must couple them, either by PI design or by agency “encouragement”
–Provide money for exchanges, travel, joint meetings, etc.
Cross Grand Challenge/eScience/infrastructure workshops
–Generally, people don’t know what each other do, or how to use it
Encourage collaboration with schools of sociology, psychiatry…
Interagency links should be encouraged
Pair up with EU and Asian agencies: this is a global world!
Get centers closely involved in such projects
Somehow encourage projects to force centers to provide needed services
Dedicate a person at centers to consult/aid/watch over, as well as resources: bandwidth, disk, CPU, etc.

More General Educational Mission
Major emphasis of the EU Network; a good idea
Appalling amount of ignorance/lack of imagination out there in apps!
2 groups to educate:
–The apps community
–The new generation of apps people, struggling to find their place: it is here
Even within projects, people do not try to use the technology!!
Must provide adequate support for prototype ---> production:
Testing
Documentation, support
Old NCSA problem: lots of hardware, not as many people to develop/use/support it (and it is better than other places!)

Final Suggestions/Thoughts
Reasonable milestones for focus, but not smothering requirements
Good balance between an engineering approach (a large, coordinated machine) and individual research freedom: encourage people to make sure their pieces fit together
Allow adequate administrative support
Encourage people to be ambitious
Allow risky proposals through!
Get participants to think big, and to understand their responsibilities to push communities forward