API, Interoperability, etc.
 Geoffrey Fox
 Kathy Benninger
 Zongming Fei
 Cas De’Angelo
 Orran Krieger*

 The (rapidly) changing technology suggests changing and standardizing APIs
 APIs need to be chosen to fit the emerging usage paradigm and to fit with future technologies
 API specifications, standards, and implementations are needed for access to and control of:
◦ Compute (OCCI)
◦ Storage (CDMI, from SNIA)
◦ Networking (implementations: OESS, FOAM, OSCARS)
 There may need to be different APIs for users, (web/gateway/app) developers, and sysadmins
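
A hedged sketch of what an OCCI-style compute request can look like over HTTP. The endpoint and token below are hypothetical, and header rendering and attribute names vary by implementation, so this is illustrative rather than a definitive OCCI client.

```python
# A hedged sketch of an OCCI-style compute request over HTTP.  The endpoint
# (occi.example.org) and token are hypothetical; real deployments differ in
# authentication, attribute names, and rendering details.
import requests

OCCI_ENDPOINT = "https://occi.example.org"   # hypothetical testbed endpoint
TOKEN = "user-api-token"                     # placeholder credential

headers = {
    "Content-Type": "text/occi",
    "X-Auth-Token": TOKEN,
    # Declare the kind of resource being created (OCCI infrastructure compute).
    "Category": 'compute; scheme="http://schemas.ogf.org/occi/infrastructure#"; class="kind"',
    # Requested attributes: core count, memory (GiB), and hostname.
    "X-OCCI-Attribute": 'occi.compute.cores=2, occi.compute.memory=4.0, '
                        'occi.compute.hostname="gateway-vm"',
}

resp = requests.post(f"{OCCI_ENDPOINT}/compute/", headers=headers)
resp.raise_for_status()
print("New compute resource:", resp.headers.get("Location"))
```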

 The challenge is exposing capabilities to the user
 Commercial CI use is focused on object technology, which differs from academic implementations
 The academic community is moving toward OpenStack (observation)
 Commercial interfaces are often proprietary, e.g., AWS provides building blocks rather than adapting to user needs
 XSEDE Science Gateways are an example of simplified user access; gateway development benefits from well-defined APIs
 Using APIs needs to be simple for the user!
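
As an illustration of why gateway development benefits from APIs, the sketch below shows a hypothetical gateway client that hides the provider-specific interface behind one user-facing call. The gateway URL, the /jobs route, and the response schema are assumptions, not an actual XSEDE gateway API.

```python
# A hypothetical science-gateway client: the user submits work through one
# simple call and never sees the provider-specific (EC2/OpenStack) interface.
# The gateway URL, /jobs route, and response schema are assumptions.
import requests

GATEWAY_URL = "https://gateway.example.edu/api/v1"   # hypothetical gateway

def submit_job(script_path: str, cores: int = 4) -> str:
    """Submit a job through the gateway and return an opaque job handle."""
    with open(script_path, "rb") as f:
        resp = requests.post(
            f"{GATEWAY_URL}/jobs",
            files={"script": f},
            data={"cores": cores},
            timeout=30,
        )
    resp.raise_for_status()
    return resp.json()["job_id"]

# Example use: job_id = submit_job("run_simulation.sh", cores=16)
```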

 It was observed that there is a close relationship between specifying a system and specifying the workflow that runs on it
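
A toy illustration (not from the discussion; all field names invented) of that overlap: placement, data movement, and storage requirements appear, in different guises, in both a system specification and a workflow specification.

```python
# A toy illustration (all field names invented) of the overlap between a
# system specification and a workflow specification: placement, data movement,
# and storage requirements appear, in different guises, in both.
system_spec = {
    "nodes": [{"name": "head", "cores": 4}, {"name": "worker", "cores": 16}],
    "network": {"bandwidth_gbps": 10},
    "storage": {"shared_path": "/scratch", "size_tb": 2},
}

workflow_spec = {
    "steps": [
        {"name": "preprocess", "runs_on": "head",   "needs": []},
        {"name": "simulate",   "runs_on": "worker", "needs": ["preprocess"]},
        {"name": "analyze",    "runs_on": "head",   "needs": ["simulate"]},
    ],
    # The workflow implicitly constrains the system: where steps run, what data
    # moves between them, and how much shared storage they need.
}
```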

 Which aspects are a clear win, and how much of the federation and distribution should be exposed to developers?
◦ For performance
◦ To keep the exposed environment simple for usability
 Exposing federation and distribution is powerful but complex (see the sketch below)
 Note that industry does not expose the federation abstraction to users
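
A hypothetical client call contrasting the two choices: one signature that hides the federation entirely and one that exposes site placement to the developer. Site names and the selection policy are invented purely for illustration.

```python
# A hypothetical client call contrasting hidden vs. exposed federation.
# Site names and the selection policy are invented for illustration only.
SITES = {"site-a": {"free_cores": 128}, "site-b": {"free_cores": 16}}

def launch(image, cores, site=None):
    """If `site` is None the federation stays hidden and the system chooses a
    placement; passing a site exposes the federation for performance tuning."""
    if site is None:
        site = max(SITES, key=lambda s: SITES[s]["free_cores"])  # naive policy
    if SITES[site]["free_cores"] < cores:
        raise RuntimeError(f"{site} cannot satisfy a request for {cores} cores")
    SITES[site]["free_cores"] -= cores
    return f"{site}/{image}/{cores}"          # opaque instance handle

print(launch("ubuntu-14.04", 8))              # system-chosen placement
print(launch("ubuntu-14.04", 8, "site-b"))    # developer-controlled placement
```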

 Research is needed to understand when to provide an API for a significant capability: when to hide the complexity and when to expose it
 A list of use cases and scenarios is needed to explore and investigate this

 Instrumentation and Monitoring (I&M)…
 I&M is needed as part of the infrastructure, and access should be available via API (of course!)
 Data/logs from the I&M capability need to be housed in a central repository (an example is XDMoD from XSEDE)
 Note: some NSF-funded projects don’t provide this for one reason or another; funding awards/agreements should state explicitly that this information is to be provided in the public domain (with sanitization as required)
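
A sketch, under assumed endpoints and field names, of pulling I&M data through an API and publishing a sanitized record to a central repository (in the spirit of XDMoD-style aggregation); neither URL is a real service.

```python
# A sketch of pulling I&M data through an API and publishing a sanitized record
# to a central repository.  Both URLs and all field names are assumptions.
import requests

MONITOR_URL = "https://im.example-testbed.org/api/metrics"   # hypothetical
REPOSITORY_URL = "https://repo.example.edu/api/ingest"       # hypothetical

def collect_and_publish(resource_id: str) -> None:
    # Fetch recent measurements (CPU, network, storage) for one resource.
    resp = requests.get(f"{MONITOR_URL}/{resource_id}", timeout=30)
    resp.raise_for_status()
    record = resp.json()
    # Sanitize before public release, e.g. drop user identifiers.
    record.pop("user_id", None)
    # Push the sanitized record to the shared, publicly accessible repository.
    requests.post(REPOSITORY_URL, json=record, timeout=30).raise_for_status()
```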

 I&M continued
 Standards and metadata for storage of the I&M data are needed
 Ways to correlate and understand this data need to be investigated
 Discover and document full workflows, for example from jobs, data transfer, file system usage, application/tool usage, etc. (see the sketch below)
 Ways to visualize this complex data into information need to be investigated
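
An illustrative record format (field names are assumptions, not an existing standard) showing how events from the scheduler, data transfer, file system, and application logs could be correlated into one end-to-end workflow trace via a shared identifier.

```python
# An illustrative record format (field names are assumptions, not a standard)
# showing how events from the scheduler, data transfer, file system, and
# application logs could be correlated into one workflow trace via a shared id.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class IMEvent:
    workflow_id: str      # correlation key shared across all I&M sources
    source: str           # "scheduler" | "transfer" | "filesystem" | "application"
    event: str            # e.g. "job_start", "transfer_complete"
    timestamp: datetime
    attributes: dict      # source-specific detail (bytes moved, exit code, ...)

trace = [
    IMEvent("wf-001", "scheduler",  "job_start",         datetime(2015, 5, 1, 9, 0),  {"cores": 64}),
    IMEvent("wf-001", "transfer",   "transfer_complete", datetime(2015, 5, 1, 9, 20), {"gb_moved": 120}),
    IMEvent("wf-001", "filesystem", "peak_usage",        datetime(2015, 5, 1, 10, 0), {"tb_used": 1.5}),
]
# Sorting by timestamp within a workflow_id reconstructs the end-to-end workflow.
trace.sort(key=lambda e: e.timestamp)
```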

 Cloud technology came first from industry
 The research and education (R&E) community is playing catch-up and discovering new ways to provide and use the cloud
 The R&E cloud business model needs to be investigated and understood from both the academic and the NSF perspective
 Would/does NSF allow awards to include the cost of using cloud resources, as they include buying cluster equipment today?

 Orran gave his example of the cloud project funded by the Commonwealth of Massachusetts to put together a cloud resource supported by approximately 10 universities and industry
 It was also mentioned that some universities are now investigating, and are interested in, getting their compute from the cloud instead of making local investments
 This is in its infancy, and the economic models need to be understood and discussed further