University of California, San Diego San Diego Supercomputer Center Computational Radiology Laboratory Brigham & Women’s Hospital, Harvard Medical School.


ICCS2005
A Dynamic Data Driven Grid System for Intra-operative Image Guided Neurosurgery
A. Majumdar (1), A. Birnbaum (1), D. Choi (1), A. Trivedi (2), S. K. Warfield (3), K. Baldridge (1), and Petr Krysl (2)
(1) San Diego Supercomputer Center, University of California San Diego
(2) Structural Engineering Dept., University of California San Diego
(3) Computational Radiology Lab, Brigham and Women's Hospital, Harvard Medical School
Grants: NSF: ITR , ; NIH: P41 RR13218, P01 CA67165, LM , I3 grant (IBM)

TALK SECTIONS
1. PROBLEM DESCRIPTION AND DDDAS
2. GRID ARCHITECTURE
3. ADVANCED BIOMECHANICAL MODEL
4. PARALLEL AND END-to-END TIMING
5. SUMMARY

1. PROBLEM DESCRIPTION AND DDDAS

Neurosurgery Challenge
Challenges:
- Remove as much tumor tissue as possible
- Minimize the removal of healthy tissue
- Avoid disrupting critical anatomical structures
- Know when to stop the resection process
These challenges are compounded by the intra-operative brain shape deformation that occurs as a result of the surgical process, which diminishes the value of the preoperative plan. It is therefore important to quantify and correct for these deformations while surgery is in progress, by dynamically updating the pre-operative images in a way that allows surgeons to react to the changing conditions. The simulation pipeline must meet the real-time constraints of neurosurgery: deliver updated images roughly once per hour, each within a few minutes, over a surgery lasting 6 to 8 hours.

Intraoperative MRI Scanner at BWH

Brain Shape Deformation (images: before surgery vs. after surgery)

Overall Process
Before image-guided neurosurgery:
- Preoperative data acquisition
- Segmentation and visualization
- Preoperative planning of the surgical trajectory
During image-guided neurosurgery:
- Intraoperative MRI
- Segmentation
- Registration and surface matching against the preoperative data
- Solve the biomechanical model for volumetric deformation
- Visualization feeding back into the surgical process
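The once-per-hour update cycle of this pipeline can be sketched in Python. Every function below is an illustrative placeholder (toy thresholding and scaling), not the actual segmentation, registration, or FEM code used at BWH and SDSC:

```python
# Hypothetical sketch of the intra-operative update pipeline described above.
# Function names and data shapes are illustrative stand-ins.

def segment(mri):
    """Label brain voxels in a scan (placeholder: simple threshold)."""
    return [v > 0.5 for v in mri]

def register(preop_seg, intraop_seg):
    """Align pre-op and intra-op segmentations (placeholder: agreement score)."""
    return sum(a == b for a, b in zip(preop_seg, intraop_seg)) / len(preop_seg)

def surface_displacements(alignment):
    """Extract boundary-condition displacements from the surface match."""
    return [alignment * 0.1] * 3  # toy displacement vector

def solve_biomechanical_model(displacements):
    """Volumetric deformation from surface boundary conditions (placeholder)."""
    return [d * 2.0 for d in displacements]

def intraop_update(preop_mri, intraop_mri):
    """One update: segment, register, solve, return the deformation field."""
    pre = segment(preop_mri)
    intra = segment(intraop_mri)
    alignment = register(pre, intra)
    return solve_biomechanical_model(surface_displacements(alignment))

deformation = intraop_update([0.2, 0.7, 0.9], [0.1, 0.8, 0.9])
```

In the real system each stage operates on 3D MRI volumes and the solve step is the FEM simulation; the sketch only shows how the stages chain together.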

Timing During Surgery (timeline figure, in minutes: pre-op segmentation before surgery; then intra-op MRI, segmentation, registration, surface displacement, biomechanical simulation, and visualization repeating as the surgery progresses)

Current Prototype DDDAS (inside hospital)
- Pre- and intra-op 3D MRI (once/hour)
- Segmentation, registration, and surface matching for boundary conditions
- Crude linear elastic FEM solution on a local computer at BWH
- Merged pre- and intra-op visualization
- Intra-op surgical decision and steering
- Cycle repeats once every hour or two during a 6- or 8-hour surgery

Current Prototype DDDAS System
- Receives 3D MRI from the operating room roughly once per hour
- Uses displacements of known surface points as boundary conditions to solve a crude linear elastic biomechanical FEM material model on a compute system located at BWH
- This crude, inaccurate model is solvable within the time constraint of a few minutes once an hour on local computers at BWH
- Dynamically updates pre-op images with intra-op images derived from the biomechanical volumetric simulation
- Time-critical updates are shown to surgeons for intra-op surgical navigation

Two Research Aspects
- Grid architecture: grid scheduling, on-demand remote access to multi-teraflop machines, and data transfer. Transferring data from BWH to SDSC, solving the detailed advanced biomechanical model, and returning results to BWH for visualization must all complete within a few minutes.
- Development of a detailed, advanced, non-linear, scalable viscoelastic biomechanical model, to capture detailed intraoperative brain deformation.

Example of Visualization: Intra-op Brain Tumor with Pre-op fMRI

2. GRID ARCHITECTURE

Queue Delay Experiment on TeraGrid Clusters
- TeraGrid is an NSF-funded grid infrastructure spanning multiple research and academic sites
- Queue delays at the SDSC and NCSA TeraGrid clusters were measured over 3 days for jobs requesting 5 minutes of wall-clock time on 2 to 64 CPUs
- A single job was submitted at a time; if a job did not start within 10 minutes, it was terminated and the next one processed
- Question: what is the likelihood of a job running promptly?
- 313 jobs were submitted to the NCSA TeraGrid cluster and 332 to the SDSC TeraGrid cluster, i.e. 50 to 56 jobs of each size on each cluster
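The probing protocol above can be sketched as follows. Here `probe_queue` simulates the scheduler's response with a supplied delay; in the real experiment it would wrap batch-system commands (job submission, status polling, cancellation), and the delay values below are made up for illustration:

```python
# Sketch of the queue-delay experiment: one short probe job at a time,
# cancelled if it has not started within 10 minutes.

QUEUE_TIMEOUT = 600   # give up after 10 minutes
WALLCLOCK = 300       # each probe job requests 5 minutes of wall-clock time

def probe_queue(ncpus, simulated_delay):
    """Return the queue delay in seconds, or None if the job never started."""
    # The real version would submit an ncpus-wide job and poll the scheduler.
    if simulated_delay <= QUEUE_TIMEOUT:
        return simulated_delay
    return None  # cancelled: counts as "did not run"

def run_experiment(sizes, delays):
    """Fraction of probes that ran, and the mean delay of those that did."""
    results = [probe_queue(n, d) for n, d in zip(sizes, delays)]
    ran = [r for r in results if r is not None]
    frac = len(ran) / len(results)
    mean_delay = sum(ran) / len(ran) if ran else None
    return frac, mean_delay

frac, mean_delay = run_experiment([2, 4, 8, 16, 32, 64],
                                  [5, 12, 40, 90, 700, 2000])
```

The two outputs correspond directly to the two charts that follow: the fraction of submitted tasks that ran, and the average delay of those that started within 10 minutes.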

Chart: % of submitted tasks that run, as a function of CPUs requested

Chart: average queue delay for tasks that began running within 10 minutes

Queue Delay Test Conclusions
- There appears to be a direct relationship between the size of the request and the length of the queue delay
- The two clusters exhibit different performance profiles
- This behavior of queue systems clearly merits further study
- A more rigorous statistical characterization is ongoing on much larger data sets

Data Transfer
- We are investigating grid-based data transfer mechanisms such as globus-url-copy and SRB
- All hospitals have firewalls for security and patient data privacy, so there is a single port of entry to internal machines

Transfer time in seconds for a 20 MB file:

Transfer direction | globus-url-copy | SRB | scp | scp -C
TG to BWH          |                 |     |     |
BWH to TG          |                 |     |     |
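One way measurements like these could be gathered is to wrap each copy of the 20 MB file in a timer. The sketch below times a local file copy as a stand-in; in the actual experiment the copy function would instead invoke globus-url-copy, the SRB utilities, scp, or scp -C through the hospital's firewall port:

```python
# Hedged sketch: time one transfer of a 20 MB file.
# A local copy stands in for the remote BWH <-> TeraGrid transfer.

import os
import shutil
import tempfile
import time

def time_transfer(src, dst, copy_fn=shutil.copyfile):
    """Return elapsed wall-clock seconds for one transfer of src to dst."""
    start = time.perf_counter()
    copy_fn(src, dst)
    return time.perf_counter() - start

# Create a 20 MB dummy file and time a copy of it.
tmpdir = tempfile.mkdtemp()
src = os.path.join(tmpdir, "volume.mri")
with open(src, "wb") as f:
    f.write(b"\0" * (20 * 1024 * 1024))
dst = os.path.join(tmpdir, "volume_copy.mri")
elapsed = time_transfer(src, dst)
```

To compare mechanisms, `copy_fn` would be swapped for a wrapper that shells out to the transfer command under test, repeated in both directions.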

3. ADVANCED BIOMECHANICAL MODEL

Mesh Model with Brain Segmentation

Current and New Biomechanical Models
- Current: linear elastic material model (RTBM)
- Advanced model under development: FAMULS
- The advanced model is based on a conforming adaptive mesh refinement (AMR) method, the FAMULS package
- Inspired by the theory of wavelets, this refinement produces globally compatible meshes by construction
- The first task is to replicate, using FAMULS, the linear elastic result produced by the RTBM code
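The core idea of adaptive refinement, subdividing only the elements whose error indicator exceeds a tolerance, can be illustrated in one dimension. This is a toy sketch with a made-up error indicator, not the FAMULS algorithm, which additionally guarantees globally compatible meshes through its wavelet-inspired construction:

```python
# Toy 1-D adaptive mesh refinement: split elements with large indicated error.

def refine(mesh, error_indicator, tol):
    """Split each 1-D element (a, b) whose indicated error exceeds tol."""
    new_mesh = []
    for a, b in mesh:
        if error_indicator(a, b) > tol:
            mid = (a + b) / 2.0
            new_mesh.extend([(a, mid), (mid, b)])  # subdivide
        else:
            new_mesh.append((a, b))                # keep the coarse element
    return new_mesh

# Made-up indicator: error grows with element size and concentrates near x=0.
err = lambda a, b: (b - a) / (1.0 + 10.0 * abs(a))

mesh = [(0.0, 0.5), (0.5, 1.0)]
for _ in range(3):
    mesh = refine(mesh, err, 0.2)
```

After three passes the mesh is fine near x=0 and coarse elsewhere, which is the behavior the FAMULS mesh figures on the next slide show around the resection region.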

FEM Meshes: RTBM (uniform) and FAMULS (AMR)

Deformation Simulation After Cut: FAMULS without AMR, FAMULS with 3-level AMR, and RTBM

Advanced Biomechanical Model
- The current solver is based on small-strain isotropic elasticity
- The new biomechanical model will be an inhomogeneous, scalable, non-linear viscoelastic model with AMR
- We also want to increase resolution close to the level of MRI voxels, i.e. millions of finite elements
- Since this complex model still has to meet the real-time constraint of neurosurgery, it requires fast access to remote multi-teraflop systems

4. PARALLEL AND END-to-END TIMING

Parallel Registration Performance

Parallel Rendering Performance

Parallel RTBM Performance (43584 tetrahedral elements)
(chart: elapsed time in seconds vs. number of CPUs, on IBM Power3, IA-64 TeraGrid, and IBM Power4)
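Scaling measurements like those in this chart are commonly summarized as speedup and parallel efficiency relative to the smallest CPU count measured. A minimal helper, applied here to made-up timings rather than the measured RTBM numbers:

```python
# Summarize strong-scaling timings as (CPUs, speedup, efficiency) triples.

def scaling_summary(times):
    """times: list of (ncpus, seconds), sorted by ncpus ascending.

    Speedup is relative to the smallest CPU count; efficiency divides the
    speedup by the ideal speedup (ncpus / base_ncpus).
    """
    base_cpus, base_t = times[0]
    return [(n, base_t / t, (base_t / t) / (n / base_cpus)) for n, t in times]

# Hypothetical (CPUs, elapsed seconds) pairs, not the measured RTBM data.
rows = scaling_summary([(2, 100.0), (4, 52.0), (8, 28.0), (16, 16.0)])
```

Falling efficiency at higher CPU counts is the usual signature of communication overhead, which is one reason the end-to-end runs that follow use 32 processors rather than the largest available allocation.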

End-to-End (BWH → SDSC → BWH) Timing
- RTBM: not during surgery
- Rendering: during surgery

End-to-End Timing of RTBM
- Transferring ~20 MB files from BWH to SDSC, running the simulation on 16 nodes (32 processors), and transferring the files back to BWH took 9* + (60** + 7***) + 50* = 124 sec
- This shows that the grid infrastructure can provide biomechanical brain deformation simulation solutions (using the linear elastic model) to surgery rooms at BWH within ~2 minutes using TeraGrid machines
- This satisfies the tight time constraint set by the neurosurgeons

End-to-End Timing of Rendering (DURING SURGERY)
- MRI data from BWH was transferred to SDSC during a surgery
- Parallel rendering was performed at SDSC
- The rendered visualization was sent back to BWH (but not shown to the surgeons)
- Total time for two sets of data, in seconds: 2 * 53 (BWH to SDSC) + 2 * 7.4 (rendering on 32 processors) + (overlapping visualization) (SDSC to BWH) = sec

5. SUMMARY

Ongoing and Future DDDAS Research
- Continuing research and development in grid architecture, on-demand computing, and data transfer
- Continuing development of the advanced biomechanical model and parallel algorithms
- Moving towards near-continuous DDDAS instead of the current once-an-hour, 3D-MRI-based DDDAS
- The scanner at BWH can provide one 2D slice every 3 seconds, or three orthogonal 2D slices every 6 seconds
- A near-continuous DDDAS architecture requires major research, development, and implementation work in the biomechanical application domain
- It also requires research on the closed-loop system of dynamic image-driven continuous biomechanical simulation, with surgical navigation and steering based on 3D volumetric FEM results