
1 Nimbus Update March 2010 OSG All Hands Meeting Kate Keahey keahey@mcs.anl.gov Nimbus Project University of Chicago Argonne National Laboratory

2 Nimbus: Cloud Computing for Science (The Nimbus Toolkit: http://workspace.globus.org)
- Allow providers to build clouds
  - Workspace Service: a service providing EC2-like functionality
  - WSRF-style and EC2-style (both WS and REST) interfaces
  - Support for Xen and KVM
- Allow users to use cloud computing
  - Do whatever it takes to enable scientists to use IaaS
  - Context Broker: turnkey virtual clusters
  - Currently investigating scaling tools
- Allow developers to experiment with Nimbus
  - For research or usability/performance improvements
  - Open source, extensible software
  - Community extensions and contributions: UVIC (monitoring), IU (EBS, research), Technical University of Vienna (privacy, research)

3 The Workspace Service
[Diagram: the VWS Service managing a pool of twelve nodes]

4 The Workspace Service
- The workspace service publishes information about each workspace
- Users can find out information about their workspace (e.g. what IP the workspace was bound to)
- Users can interact directly with their workspaces the same way they would with a physical machine
[Diagram: the VWS Service managing a pool of nodes]
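The lifecycle the slides describe (deploy a VM onto a pool node, look up the published information such as its IP binding, terminate it) can be sketched as a toy in-memory service. This is a minimal illustration of the EC2-like run/describe/terminate pattern, not the actual Nimbus API; all class and method names are invented for the sketch.

```python
import itertools

class ToyWorkspaceService:
    """Toy in-memory model of an EC2-like workspace service: it leases
    VMs out of a fixed pool of nodes and publishes per-workspace info.
    Illustrative only -- not the real Nimbus interface."""

    def __init__(self, pool_nodes):
        self.free = list(pool_nodes)     # idle pool nodes (Xen/KVM hosts)
        self.workspaces = {}             # workspace id -> published info
        self._ids = itertools.count(1)

    def run(self, image):
        """Deploy an appliance image on a free pool node."""
        if not self.free:
            raise RuntimeError("pool exhausted")
        node = self.free.pop(0)
        n = next(self._ids)
        ws_id = "ws-%d" % n
        # the service binds the workspace to an address and publishes it,
        # so the user can interact with the VM like a physical machine
        self.workspaces[ws_id] = {"image": image, "node": node,
                                  "ip": "10.0.0.%d" % n}
        return ws_id

    def describe(self, ws_id):
        """Return published info, e.g. which IP the workspace was bound to."""
        return self.workspaces[ws_id]

    def terminate(self, ws_id):
        """Destroy the VM and return its node to the pool."""
        self.free.append(self.workspaces.pop(ws_id)["node"])

svc = ToyWorkspaceService(["pool-1", "pool-2"])
wid = svc.run("sl4-star.img")
print(svc.describe(wid)["ip"])  # 10.0.0.1
```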

5 Turnkey Virtual Clusters
- Turnkey, tightly-coupled cluster
  - Shared trust/security context
  - Shared configuration/context information
- Context Broker goals
  - Every appliance
  - Every cloud provider
  - Multiple distributed cloud providers
- Used to contextualize 100s of virtual nodes for EC2 HEP STAR runs, Hadoop nodes, HEP ALICE nodes…
- Working with rPath on developing appliances and on standardization
[Diagram: MPI cluster nodes exchanging (IP, host key) pairs: (IP1, HK1), (IP2, HK2), (IP3, HK3)]
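The (IP, host key) exchange in the diagram is the heart of contextualization: each booting VM registers with the broker, and once the whole cluster has checked in, every node pulls down the full roster so the members can build a shared trust context (e.g. known_hosts and /etc/hosts entries). A minimal sketch of that exchange, with invented names, follows; it is not the real Context Broker protocol.

```python
class ToyContextBroker:
    """Toy sketch of the Context Broker exchange: booting VMs report
    their IP and SSH host key; when all expected nodes have checked in,
    each node receives the full roster to build a shared security
    context. Illustrative names only."""

    def __init__(self, expected):
        self.expected = expected   # cluster size we are contextualizing
        self.roster = {}           # ip -> host key

    def register(self, ip, host_key):
        """Called by each VM as it boots."""
        self.roster[ip] = host_key

    def all_checked_in(self):
        return len(self.roster) == self.expected

    def context_for(self, ip):
        """What a node pulls down once contextualization completes:
        every (IP, host key) pair in the cluster, itself included."""
        if not self.all_checked_in():
            raise RuntimeError("cluster not fully contextualized yet")
        return sorted(self.roster.items())

broker = ToyContextBroker(expected=3)
broker.register("10.0.0.1", "HK1")
broker.register("10.0.0.2", "HK2")
broker.register("10.0.0.3", "HK3")
print(broker.context_for("10.0.0.1"))
```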

6 Nimbus Trivia (The Nimbus Toolkit: www.nimbusproject.org)
- Nimbus: www.nimbusproject.org
- Science Clouds: www.scienceclouds.org
- GitHub for code management
- Latest release is 2.4 (02/02/10)

7 Science in the Clouds: Update on Applications Working with Nimbus

8 STAR Experiment
- STAR: a nuclear physics experiment at Brookhaven National Laboratory
- Studies fundamental properties of nuclear matter
- Problems
  - Complexity
  - Consistency
  - Availability
Work by Jerome Lauret, Levente Hajdu, Lidia Didenko (BNL), Doug Olson (LBNL)

9 STAR Virtual Clusters
- Virtual resources
  - A virtual OSG STAR cluster: OSG headnode (gridmapfiles, host certificates, NFS, Torque), worker nodes: SL4 + STAR
  - One-click virtual cluster deployment via Nimbus Context Broker
- From Science Clouds to EC2 runs
- Running production codes since 2007
- The Quark Matter run: producing just-in-time results for a conference: http://www.isgtw.org/?pid=1001735

10 Priceless?
- Compute costs: $5,630.30
  - 300+ nodes over ~10 days
  - Instances: 32-bit, 1.7 GB memory
    - EC2 default: 1 EC2 CPU unit
    - High-CPU Medium Instances: 5 EC2 CPU units (2 cores)
  - ~36,000 compute hours total
- Data transfer costs: $136.38
  - Small I/O needs: moved <1 TB of data over the duration
- Storage costs: $4.69
  - Images only; all data transferred at run time
- Producing the result before the deadline… $5,771.37
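The bottom line is just the sum of the three line items on the slide:

```python
# Cost line items from the STAR EC2 run, as reported on the slide
compute = 5630.30    # ~36,000 compute hours on 300+ nodes over ~10 days
transfer = 136.38    # <1 TB of data moved over the duration
storage = 4.69       # images only; all data transferred at run time

total = round(compute + transfer + storage, 2)
print(total)  # 5771.37 -- the slide's bottom line
```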

11 Modeling the Progression of Epidemics
- Can we use clouds to acquire on-demand resources for modeling the progression of epidemics?
- What is the efficiency of simulations in the cloud?
  - Compared execution on a physical machine versus 10 VMs on the Nimbus cloud
  - 2.5 hrs versus 17 minutes
  - Speedup = 8.81, roughly 9 times faster
Work by Ron Price and others, Public Health Informatics, University of Utah
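The speedup figure is the ratio of the two wall-clock times; with the rounded numbers shown on the slide it works out as:

```python
serial_min = 2.5 * 60    # one physical machine: 2.5 hours
parallel_min = 17        # 10 VMs on the Nimbus cloud: 17 minutes

speedup = serial_min / parallel_min
# ~8.82 with these rounded inputs; the slide reports 8.81 from the
# unrounded timings -- either way, roughly 9x faster
print(round(speedup, 2))
```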

12 Ocean Observatory Initiative (OOI CI LCO Review, Feb 2010)
- Highly Available Services
- Rapidly provision resources
- Scale to demand

13 OOI Architecture (OOI CI LCO Review, Feb 2010)
[Diagram: an EPU runs an HA Service (the OOI application) plus EPU workers; each is a VM (deployable unit) carrying the application software]
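The EPU (Elastic Processing Unit) pattern in the diagram amounts to a supervisor loop: keep a target number of worker VMs alive behind the HA service, replacing any that fail. A toy version of that control loop, with invented names (this is not the OOI implementation):

```python
import itertools

class ToyEPU:
    """Toy sketch of an Elastic Processing Unit: maintain `target`
    worker VMs behind an HA service, replacing failed ones on each
    control-loop pass. Illustrative only."""

    def __init__(self, target):
        self.target = target
        self.workers = set()
        self._ids = itertools.count(1)

    def reconcile(self, failed=()):
        """One control-loop pass: drop failed workers, start
        replacements until the target count is restored."""
        self.workers -= set(failed)
        while len(self.workers) < self.target:
            self.workers.add("worker-%d" % next(self._ids))
        return sorted(self.workers)

epu = ToyEPU(target=3)
print(epu.reconcile())                      # boots 3 workers
print(epu.reconcile(failed=["worker-2"]))   # replaces the failed one
```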

14 Elastically Provisioned Resources
- CHEP 2009 paper, Harutyunyan et al., in collaboration with CernVM
- CCGrid 2010 paper, Marshall et al., "Elastic Sites"
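The "elastic site" idea is to watch local batch demand and boot or release cloud VMs to track it. A minimal policy sketch under the simplifying assumption of one job slot per VM; this is an invented toy policy, not the algorithm from the Marshall et al. paper:

```python
def elastic_policy(queued_jobs, running_vms, max_vms):
    """Return how many cloud VMs to add (positive) or release
    (negative), assuming one job slot per VM and a hard cap on
    cloud usage. Toy policy for illustration."""
    wanted = min(queued_jobs, max_vms)   # never exceed the site's cap
    return wanted - running_vms

print(elastic_policy(queued_jobs=12, running_vms=4, max_vms=10))  # 6: grow toward the cap
print(elastic_policy(queued_jobs=0, running_vms=4, max_vms=10))   # -4: queue drained, release all
```

A real policy would also smooth over transient queue spikes before terminating VMs, since cloud instances are typically billed by the hour.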

15 Hadoop in the Science Clouds
- Papers:
  - "CloudBLAST: Combining MapReduce and Virtualization on Distributed Resources for Bioinformatics Applications" by A. Matsunaga, M. Tsugawa and J. Fortes. eScience 2008.
  - "Sky Computing" by K. Keahey, A. Matsunaga, M. Tsugawa, J. Fortes. IEEE Internet Computing, September 2009.
[Diagram: a Hadoop cloud spanning U of Florida, U of Chicago, and Purdue]
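The CloudBLAST work runs on Hadoop's MapReduce model. A minimal word count in plain Python shows the map/shuffle/reduce shape that Hadoop distributes across the cloud nodes in the diagram (CloudBLAST itself maps BLAST runs over sequence partitions, which this toy does not do):

```python
from collections import defaultdict

def map_phase(records):
    """Map: emit (word, 1) for every word, like a Hadoop mapper."""
    for line in records:
        for word in line.split():
            yield word, 1

def reduce_phase(pairs):
    """Shuffle + reduce: group pairs by key and sum the counts."""
    groups = defaultdict(int)
    for word, count in pairs:
        groups[word] += count
    return dict(groups)

counts = reduce_phase(map_phase(["nimbus cloud", "cloud computing"]))
print(counts["cloud"])  # 2
```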

16 Keeping the Blood Flowing
- Arterial tree models
  - The NECTAR project
  - Simulating the blood flow in all major arteries of the human body
  - Running across TeraGrid
- Investigating cloud computing
  - Environments
  - Network
  - Efficiency
Work by Karniadakis & Sherwin and Dong & Karonis (Brown, Imperial College, NIU)

17 Nimbus for FutureGrid
- FutureGrid: experimental testbed
- Infrastructure-as-a-Service for FutureGrid
  - Dynamic cloud deployment
  - Improved accounting, administration, user experience
- Continue to provide IaaS cycles for science
  - Encourage new and experimental uses of the infrastructure
  - Continue reaching out to scientists in the US and collaborators abroad
- Develop new capabilities
  - Automate building of "sky computing" scenarios
  - Provide additional tools on the cloud federation level

18 Nimbus: Friends and Family
- Nimbus core team:
  - UC/ANL: Kate Keahey, Tim Freeman, David LaBissoniere, John Bresnahan
  - UVIC: Ian Gable & team: Patrick Armstrong, Adam Bishop, Mike Paterson, Duncan Penfold-Brown
  - UCSD: Alex Clemesha
- Contributors: http://www.nimbusproject.org/about/people/
- Other efforts:
  - ViNe: Mauricio Tsugawa, Jose Fortes (UFL)

