Purdue RP Highlights
TeraGrid Round Table, September 23, 2010
Carol Song, Purdue TeraGrid RP PI
Rosen Center for Advanced Computing, Purdue University
Infrastructure updates
Steele moved to a new data center
New addition to community clusters – the Rossmann cluster
–Installed, up and running in early September
–Currently 8,800+ cores
–HP ProLiant DL165 G7 nodes with dual 12-core AMD Opteron 6172 processors (24 cores per node)
–48 GB or 96 GB RAM and 250 GB of local disk on each node
–10 Gbit Ethernet
–150 TB Lustre filesystem
Purdue Condor resource now has ~42,300 cores
Storage
DC-WAN mounted and used at Purdue
–Working on Lustre LNET routers to reach compute nodes by the end of October (a configuration sketch follows this slide)
–Installing a system to act as a Lustre WAN router now; expect to have the necessary configuration in place during IU's next DC-WAN maintenance window
–Testing and deployment to follow
Distributed Replication Service
–Sharing spinning disk to DRS today
–Investigating integration with the Hadoop Distributed File System (HDFS)
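For readers unfamiliar with the setup: routing Lustre over the WAN uses an LNET router node that forwards traffic between the wide-area and cluster networks, after which compute nodes mount the remote filesystem directly. A minimal sketch, with illustrative network names, NIDs, and filesystem name ("dcwan") rather than the actual Purdue/IU configuration:

  # On the LNET router node (connected to both networks), in /etc/modprobe.d/lustre.conf:
  options lnet networks="tcp0(eth0),tcp1(eth1)" forwarding=enabled

  # On each compute node: reach the WAN network (tcp0) via the router's NID
  options lnet networks="tcp1(eth0)" routes="tcp0 192.0.2.10@tcp1"

  # Mount the remote filesystem by pointing at its MGS NID
  mount -t lustre 192.0.2.1@tcp0:/dcwan /mnt/dc-wan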
New Developments in Condor Pool
Virtual Machine “Universe”
–Running on student Windows labs today, with VMware
–Integrating now: KVM and libvirt on cluster (Steele) nodes
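In Condor's VM universe, the job is a virtual machine image described by a submit file rather than an executable. A minimal sketch for the KVM case, with an illustrative disk image path and sizes (not a Purdue configuration):

  # Minimal VM-universe submit description (illustrative values)
  universe      = vm
  vm_type       = kvm
  executable    = kvm_vm_job                 # in the vm universe this is a label, not a file
  vm_memory     = 1024                       # MB of RAM for the guest
  vm_networking = true
  vm_disk       = /images/worker.img:hda:w   # image:device:permissions
  log           = vm_job.log
  queue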
VM Controller
Summer project by an undergraduate intern (E. Albersmeyer from CIT Technology)
Skills used: C# programming, VMware, UNIX administration, Windows 7
–Control VM state (on/off, depending on user activity; illustrated below)
–Provide usage information to the system owner – important in a community resource environment
–Study performance trade-offs
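The slide does not say how the controller drives VMware; one plausible interface (an assumption, not confirmed here) is VMware's vmrun command-line tool, which a controller can invoke as lab seats go idle or become active. The .vmx path below is hypothetical:

  # Power a worker VM on when the lab seat is idle
  vmrun start C:\labs\condor-worker\condor-worker.vmx nogui
  # Suspend it when a student logs back in
  vmrun suspend C:\labs\condor-worker\condor-worker.vmx
  # Enumerate running VMs for usage reporting
  vmrun list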
Cloud Computing: Wispy
Purdue staff operating an experimental cloud resource
–Built with Nimbus from the University of Chicago
–Current specs: 32 nodes (128 cores)
–16 GB RAM and 4 cores per node
–Public IP space for VM guests
Available for allocation in POPS now
Use cases: next slide
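Allocation holders typically drive a Nimbus cloud through the Nimbus cloud-client; a sketch of the usual workflow (image name, hours, and handle are illustrative, and Wispy's client setup may differ):

  ./bin/cloud-client.sh --transfer --sourcefile mycluster.img   # upload an image
  ./bin/cloud-client.sh --run --name mycluster.img --hours 2    # boot a VM lease
  ./bin/cloud-client.sh --status                                # check running leases
  ./bin/cloud-client.sh --terminate --handle vm-001             # shut it down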
Wispy – Use Cases
Used in virtual clusters
–Publications using Purdue’s Wispy cited below
NEES project exploring using Wispy to provision on-demand clusters for quick turnaround of wide parallel jobs
With the OSG team, using Wispy (and Steele) to run VMs for the STAR project
Working with faculty at Marquette University to use Wispy in a Fall 2010 course teaching cloud computing concepts

A. Matsunaga, M. Tsugawa, and J. Fortes, “CloudBLAST: Combining MapReduce and Virtualization on Distributed Resources for Bioinformatics Applications,” eScience 2008.
K. Keahey, A. Matsunaga, M. Tsugawa, and J. Fortes, “Sky Computing,” IEEE Internet Computing, September 2009.
Clouded Computational Sciences
Craig A. Struble, Department of Mathematics, Statistics, and Computer Science, Marquette University
Fall 2010 semester
A course covering topics related to science clouds and cloud architectures
Students should be able to
–Describe different cloud architectures
–Identify prevalent cloud middleware
–Address problems in computational sciences using a cloud
–Understand security and ethical concerns surrounding science clouds
Required to do (the Hadoop exercise is sketched after this list)
–Instantiate a single VM serving a web page, etc.
–Instantiate a 4-node cluster of VMs running Condor jobs
–Instantiate VM clusters of 4, 8, and 16 nodes running parallel, MPI-based apps
–Instantiate an 8-node Hadoop cluster to process a large text DB
–…
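As a concrete instance of the final exercise above, here is what a student might run once the 8-node Hadoop cluster is up, using the wordcount example shipped with 0.20-era Hadoop releases (jar name and paths are illustrative):

  # Load a text corpus into HDFS and run the stock wordcount example
  hadoop fs -mkdir /user/student/books
  hadoop fs -put ./gutenberg/*.txt /user/student/books
  hadoop jar hadoop-0.20.2-examples.jar wordcount /user/student/books /user/student/counts
  hadoop fs -cat /user/student/counts/part-r-00000 | head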
Purdue – NCAR Collaboration
Develop Earth Science Gateways
Leverage NCAR’s work on ESG and ESG-Curator (may have new names now) and Purdue’s climate model gateway (the CCSM portal)
Build around CCSM4, the new version
Architectural changes – service oriented
–CCSM4 web services
–Purdue migrating its CCSM3 gateway to also use the new web services
Publish climate model data to ESG
NCAR workflow application will use the CCSM4 web service to run on TeraGrid
ExTENCI
Joint project between OSG and TeraGrid
Kickoff meeting: August 19 at Fermilab
Distributed File System (Lustre-WAN)
–CMS/ATLAS HEP – Ralph Roskies
Workflow & Client Tools
–SCEC & protein folding – Daniel S. Katz / Mike Wilde
Job Submission Paradigms
–Cactus application – Shantenu Jha / Miron Livny
Virtual Machines (Carol Song / Sebastien Goasguen)
–STAR & CMS
–Demonstrated: STAR cloud VM running at Purdue, joining their simulation cloud
–Discussed using Wispy’s SOAP interface with a CMS group to let their glideinWMS infrastructure start resources on the cloud and join their glidein pool; plan to demonstrate this soon (a verification sketch follows this slide)
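Once cloud-booted workers join a glidein pool, their presence can be checked with standard Condor tooling; a sketch with an illustrative collector address and hostname pattern, not the actual CMS setup:

  # List slots reporting to the glidein pool's collector
  condor_status -pool glidein-collector.example.edu
  # Show only slots running on Wispy-hosted VMs (hostname pattern is illustrative)
  condor_status -constraint 'regexp("wispy", Machine)'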