Garden of Architectures CSG Workshop May 2008 Jim Pepin CTO

Disruptive Change
Doubling (Moore's Law or …)
– Transistors: multi-core
– Disk capacity
– New mass storage (flash, etc.)
– Parallel apps
– Storage mgmt
– Optics-based networking

Disruptive Change
Federated identity
– Large VOs
– Shared research/clinical spaces
Team science/academics
– Paradigm shift: CI as a tool for all scholarship

Disruptive Change
Lack of diversity in computing architectures
– x64 has 'won'
  – Maybe IBM/Power survives at the edges
  – Maybe Sun/SPARC at the edges
– This creates a mono-culture, which is dangerous
– Innovation here is in the consumer space: game boxes and phones drive it

Network Futures
Optical bypasses
– Very high speed: low friction, low jitter, facilities-based
– GLIF examples
– RONs (regional optical networks)
– Exchanges

Network Futures
"Security" is driving researchers away from us
Are we the problem?
– Where does 'security' belong?
– How do we do VOs with a two-port internet?
– Will we see our networks become the 'campus phone switch' of the 2010s?

Data Futures
Massive storage (really, really big)
Object-oriented (in some cases)
Preservation
Provenance
Distributed
Blur between databases and file systems
– Metadata

New Operating Environments
Operating systems in the network
– Grids
ID management
– But done poorly from an integration view
How to build petascale single systems
– Scaling applications is the biggest problem
Training
"Cargo cult" systems and applications

New Operating Environments
100s of TF on campus (but how to use it and build it on campus?)
– Tied into national petascale systems
– All the problems of TeraGrid and VOs, on steroids
Network security friction points
Identity management
Non-homogeneous operating environments

Computation
Massively parallel
– Many cores (doubling every 2-3 yrs)
Commodity parts
– Massive collections of nodes with high-speed interconnect
Heat and power density
Optical on-chip technology
– Legacy code scales and performs poorly (or worse); see the sketch below
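To make the many-core point concrete (this example is not from the original deck), here is a minimal sketch of the kind of change legacy serial code needs before it benefits from multiple cores: a plain loop annotated with OpenMP. The array size and the saxpy-style kernel are hypothetical placeholders.

```c
/* Hypothetical sketch: exposing a legacy serial loop to many cores with OpenMP.
 * Compile with, e.g.: gcc -O2 -fopenmp saxpy.c -o saxpy
 */
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

#define N 10000000  /* problem size (illustrative) */

int main(void) {
    double *x = malloc(N * sizeof *x);
    double *y = malloc(N * sizeof *y);
    if (!x || !y) return 1;

    for (long i = 0; i < N; i++) { x[i] = 1.0; y[i] = 2.0; }

    /* The legacy version is the same loop without the pragma:
     * it uses one core no matter how many the chip provides. */
    #pragma omp parallel for
    for (long i = 0; i < N; i++)
        y[i] = 3.0 * x[i] + y[i];

    printf("threads available: %d, y[0] = %f\n", omp_get_max_threads(), y[0]);
    free(x);
    free(y);
    return 0;
}
```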

Viz/Remote Access
SHDTV-like quality (4K)
– Enables true telemedicine and robotic surgery
– Massive storage ties into this
– The OptIPuter project is an example (Calit2)
– Collaboration spaces with true haptic and visual presence; social sites are simple prototypes
Large-screen applications and telepresence

Versus
Old code
– Much of it based on the 360/VAX/name it
  – Gaussian is the poster child
– Vector-optimized
Static IT models
– Network defenders in IT hurt researchers
– Researchers don't play well with others
– Condo model evolving

Versus
Thinking this is just for science/engineering
– Large data
– Interactive applications
Social science apps
– Education outcomes at Clemson: large data, statistics at huge scale
– Shoah Foundation at USC: massive data, networks, VO

Vision/Sales Pitch
Access to various kinds of resources
– Parallel high performance (can be in a condo, depending on politics)
– Flexible node configurations
– Large storage of various flavors
– Viz
– Leading-edge networks

"Clusters"
Large collection of multi-core nodes
– High-performance interconnect
What makes a cluster not just a bunch of nodes?
– Access to large data storage at parallel speeds (Lustre, SAM/QFS, PVFS); see the MPI-IO sketch below
– Ability to put in large-memory nodes
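As an illustration of "storage at parallel speeds" (again, not from the original deck), a minimal MPI-IO sketch in which every rank writes its slice of a distributed array into one shared file: this is the collective access pattern that parallel file systems such as Lustre or PVFS are built to serve. The file name and per-rank sizes are hypothetical.

```c
/* Hypothetical sketch: collective parallel write with MPI-IO.
 * Build: mpicc -O2 pio.c -o pio    Run: mpirun -np 4 ./pio
 */
#include <mpi.h>
#include <stdlib.h>

#define LOCAL_N 1048576   /* doubles per rank (illustrative) */

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double *buf = malloc(LOCAL_N * sizeof *buf);
    for (int i = 0; i < LOCAL_N; i++) buf[i] = (double)rank;

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "slab.dat",   /* hypothetical shared file */
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    /* Each rank writes at its own offset; the collective call lets the
     * MPI-IO layer aggregate requests before they reach the file system. */
    MPI_Offset offset = (MPI_Offset)rank * LOCAL_N * sizeof(double);
    MPI_File_write_at_all(fh, offset, buf, LOCAL_N, MPI_DOUBLE, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    free(buf);
    MPI_Finalize();
    return 0;
}
```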

"Clusters"
– Magic chips: GPUs, FPGAs, etc.
  – Boutique today, but gains can be enormous
– Relation to desktops/local systems
– How to integrate into national systems
  – Identity/security/networking
– Viz clusters
  – Render agents
  – Large-scale, friction-free networking

Storage Farms
Diverse data models
– Large streams (easy to do)
– Large numbers of small files (hard to do); see the sketch below
– Integrating mandates (security, preservation)
– Blur between institutional data and personal/research data
– Storage spans external, campus, departmental, and local systems
– Speed of light matters
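A small sketch (not in the original deck) of why many small files are harder than one large stream: the same 64 MB written once as a single stream and once as thousands of small files, where each small file costs its own open/create/close, i.e. a round of metadata work per file. Paths and sizes are illustrative only.

```c
/* Hypothetical sketch: one large stream versus many small files. */
#include <stdio.h>
#include <string.h>

#define TOTAL_BYTES (64 * 1024 * 1024)
#define SMALL_BYTES 4096
#define NUM_SMALL   (TOTAL_BYTES / SMALL_BYTES)

int main(void) {
    static char block[SMALL_BYTES];
    memset(block, 'x', sizeof block);

    /* One large stream: a single open, many sequential writes, one close. */
    FILE *big = fopen("stream.dat", "wb");
    if (!big) return 1;
    for (int i = 0; i < NUM_SMALL; i++)
        fwrite(block, 1, sizeof block, big);
    fclose(big);

    /* Many small files: NUM_SMALL opens, creates, and closes for the same
     * amount of data, which is what stresses the metadata servers. */
    char name[64];
    for (int i = 0; i < NUM_SMALL; i++) {
        snprintf(name, sizeof name, "small_%05d.dat", i);
        FILE *f = fopen(name, "wb");
        if (!f) return 1;
        fwrite(block, 1, sizeof block, f);
        fclose(f);
    }
    printf("wrote %d bytes as 1 stream and as %d small files\n",
           TOTAL_BYTES, NUM_SMALL);
    return 0;
}
```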

Meaning of Life
Much closer relations needed with central IT
– Networks/identity mgmt/security/policy
– But not just 'at scale'
How to use the disruptive technologies
– Cores, GPUs, Cell, FPGAs, flash, optical networks
– Disruptive software/services as well

Meaning of Life
Build an ecosystem of services
– Some central, some local, some external
– Not just computing, networks, and storage
– Our community has "gone global"; the campus is not a castle
Earlier example of 8 social science faculty
We have thousands of communities
Can't be one-size-fits-all