
1 Stone Soup Workshop: Research Computing Redux

2 Setting the Stage
- Goals and Outcomes
- Who we are
- External Contexts
  - Cyberinfrastructure
  - Federal agencies and national labs
  - PACIs
  - Grids

3 Workshop Goals
- Share approaches to enterprise support of research computing
  - Resources
  - Services
- Understand how campuses are approaching faculty who use national resources and participate in virtual organizations
- Identify the need for, and value of, ongoing flywheeled discussions

4 Who we are…
- Central vs. departmental/distributed
- Technical vs. management vs. user support
- Steady vs. grant funding
- …

5 Cyberinfrastructure
- A broad definition, centering on computational, data, and networking resources, but with dimensions of workforce readiness, etc.
- The Atkins report to PITAC and NSF as the blueprint
- The SCI division of CISE as one part of the implementation
- The budget and the associated programs have yet to emerge
- NMI and PACI as the models…

6 Federal agencies
- The wellspring of much research computing support
- Campus scientists funded to solve agency research problems, collaborating with agency and lab personnel and using lab data and computers
- Used to fund research networks, though that role is diminishing
- Many of the big agencies – NSF, DOE, NASA, NIH – support both traditional and Grid-oriented major initiatives
- Coordination is loose and revolves around two or three committees – LSN, MAGIC, JET

7 External computing resources
- A few discipline-specific – NCAR, NASA Ames, etc.
- The PACIs – SDSC, PSC, NCSA
- State capacities – Ohio, North Carolina, California
- National Labs
  - Minnesota High Performance (Army)
  - Maui
  - DOE Energy Sites: LBL, Argonne, Los Alamos, etc.
- International – CERN, radar telescopes, etc. – primarily remote instruments, but with massive data-processing needs

8 Grids: Hype, Reality, and Hyperreality
- Intent and lineage
- Inter-realm and intra-realm Grids
- Standards and code
- Major deployments
  - Infrastructure
  - User communities
- International perspective
- Integration with the enterprise

9 Intent and Lineage
- Create a consistent and coordinated computing environment using widely distributed and heterogeneous resources
  - Later extended to apply to data sets and remote instrumentation
- A widely and loosely used term and set of concepts since operating systems first got boring
- Branded as a specific architecture in the Foster and Kesselman book, and then as a specific instantiation of that architecture in a set of code called Globus
- Today a confusing set of architectures, organizations, and code bases

10 Grids today
- Perceived as the only viable answer to:
  - Physical limits in traditional computing approaches
  - Funding limits for scientific instruments
  - Scaling issues in massive data sets
- A set of major, funded, and highly visible science projects in the US and Europe
- A set of buzz-erds – Grids Today, random conferences, etc.
- A challenged standards process
- A tangled set of code alternatives

11 Inter-realm and intra-realm Grids
- Inter-realm
  - The traditional model of distributed systems: resources located in autonomous realms, harnessed as a uniform resource both for users in those realms and for external virtual-organization users
  - Exposes numerous AAA issues, as well as policy dimensions to scheduling, data staging, etc.; a sketch of one common AAA mapping follows this slide
- Intra-realm (enterprise)
  - Harnessing the resources within an enterprise, either to serve the enterprise's own high-end needs (Boeing) or to act as an outsourced service provider (IBM on-demand…)
  - No longer needs open standards for AAA, and OS issues are simplified
  - Might require an external web-service interface
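
To make the inter-realm AAA problem concrete, here is a minimal sketch, in Python, of the kind of identity mapping a Globus-style grid-mapfile performs: translating the certificate subject DN presented by a virtual-organization user into a local account. The quoted-DN file format is the common convention; the path, sample names, and helper functions are illustrative assumptions, not a drop-in implementation.

    # Sketch only: resolve a certificate subject DN to a local account,
    # grid-mapfile style. Real deployments add wrinkles this omits
    # (multiple accounts per DN, comments, escaping, pool accounts).

    def load_gridmap(path="/etc/grid-security/grid-mapfile"):
        """Parse lines of the form: "/O=Example/CN=Jane Doe" jdoe"""
        mapping = {}
        with open(path) as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith("#"):
                    continue
                # The DN is quoted because it contains spaces; the local
                # account name follows the closing quote.
                if line.startswith('"'):
                    dn, _, account = line[1:].partition('" ')
                    mapping[dn] = account.strip()
        return mapping

    def map_user(dn, mapping):
        """Return the local account for a DN, or None when the external
        user has no standing in this realm (the policy question)."""
        return mapping.get(dn)

    # Hypothetical usage, against a local copy of the file:
    # accounts = load_gridmap("grid-mapfile")
    # print(map_user("/O=Example/CN=Jane Doe", accounts))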

12 Standards and code
- Globus as the "de facto" standard
  - GT3 is the current version; related to the NMI releases
  - There are deviant paths based on GT2
  - And other distinct code bases…
  - And commercial stand-alone products
- Lots of add-in modules with complex interactions
- Increasing use of proxies and portals to hide the complexity (a sketch follows this slide)
- Global Grid Forum – standards and meetings
- Enterprise Grid Alliance
- OGSA and WSRF; GGF and OASIS
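
The "proxies and portals" point is easiest to see in code. Below is a minimal sketch, assuming a GT2-era command-line toolkit, of how a portal hides the grid plumbing behind a single call: create a short-lived proxy credential, then submit a job to a remote gatekeeper. The hostname and exact invocations are illustrative assumptions, not a tested recipe.

    # Sketch of the portal pattern: one simple call in front of the
    # proxy/AAA machinery. Command names follow the GT2-era tools
    # (grid-proxy-init, globus-job-run); flags and hosts are assumed.
    import subprocess

    def run_on_grid(gatekeeper, executable, *args):
        """Run a job on a remote grid resource and return its stdout."""
        # Derive a temporary proxy from the user's long-term certificate,
        # so downstream services never see the private key itself.
        subprocess.run(["grid-proxy-init"], check=True)
        # Submit the job to the remote gatekeeper and capture its output.
        result = subprocess.run(
            ["globus-job-run", gatekeeper, executable, *args],
            check=True, capture_output=True, text=True)
        return result.stdout

    # Hypothetical usage: the user sees one function, not the machinery.
    # print(run_on_grid("gatekeeper.example.edu", "/bin/hostname"))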

13 Major deployments
- Infrastructure
  - TeraGrid
  - DOE Grid
  - NEESgrid
  - NASA Grid
- User communities
  - Physicists
  - Energy researchers
  - Medical researchers, chemists, and geologists to come
  - Plans to extend to broad communities such as undergrads and schoolkids…

14 International Perspective
- Several major, apparently successful efforts in Europe, many revolving around CERN; one of the highlights of the EU; good showcases in Asia
- UK e-Science is a major set of programs
- Less expectation of leveraging the enterprise
- Simpler scaling issues
- Partnerships with the US are essential

15 Grim realities
- The code base is complex, changing, and incomplete
- Standards gaps are numerous and in critical spots
- Sharing is hard
- The TeraGrid security incident
- Deadlines slip, and gross simplifications are needed
- And yet, IMHO, they need to be mastered.

16 Integration points with the enterprise
- The desktops sit on a campus
- The users have primarily campus orientations
- The users tend to have significant campus prominence
- Frequently the resources sit on campuses
- Frequently the resources are jointly owned and operated by a virtual organization and a real organization

