NML Bioinformatics Service
- Licensed Bioinformatics Tools
- High-throughput Data Analysis
- Literature Study
- Data Mining
- Functional Genomics Analysis
- Vector NTI Advance®

Software                        License type          License number    Registered users
Galaxy                          Open Access           Unlimited         1524
Partek Flow                     Annual/Floating
Nextbio                         Annual/Site           Unlimited
BIOBASE                         Annual/Site           Unlimited
Oncomine*                       Annual/Named
Golden Helix SVS 7**            Annual/Floating
Genevestigator                  Annual/Named
Partek Genomics Suite           Annual/Floating
Ingenuity Pathway Analysis      Annual/Floating
Vector NTI Advance              Perpetual/Floating
Total Annual Registered Users

* Oncomine was downsized to 2 named users starting FY2014; user growth in 2014 represents the number of consultation requests.
** Golden Helix SVS 7 was downsized from 1 floating license to 1 fixed license starting FY2014 and discontinued in FY2015.

[Chart: Cumulative Annual Registered Users of NML Licensed Commercial Bioinformatics Resources]

Centralized Licensing Model Delivers Dramatic Cost Savings: $65,000 vs. $1,870,000
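As a worked check of the claim, here is a minimal Python sketch, assuming (as the slide layout suggests, though the figures are unlabeled in the transcript) that $65,000 is the cost under centralized licensing and $1,870,000 is the equivalent cost of individually purchased licenses:

# Worked arithmetic for the slide's two figures; the labels below are
# an assumed reading, since the slide shows the numbers unlabeled.
centralized = 65_000      # cost under the centralized licensing model
individual = 1_870_000    # equivalent cost of individually purchased licenses
savings = individual - centralized
print(f"Savings: ${savings:,} ({100 * savings / individual:.1f}% less)")
# -> Savings: $1,805,000 (96.5% less)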

Ingenuity Pathway Analysis Usage at USC (Jan.–Dec. 2014): average monthly sessions = 314; average monthly hours = 323

Ingenuity Pathway Analysis Monthly Denied Sessions (Jan.–Nov. 2014): average monthly denied sessions = 117
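Together with the previous slide's average of 314 granted sessions per month, the denial figure gives a rough sense of floating-license contention. A small sketch, assuming total launch attempts can be approximated as granted plus denied sessions (an assumption not stated on the slide):

# Rough contention estimate; assumes attempts = granted + denied sessions.
granted = 314   # average monthly sessions (previous slide)
denied = 117    # average monthly denied sessions (this slide)
attempts = granted + denied
print(f"~{100 * denied / attempts:.0f}% of launch attempts denied")
# -> ~27% of launch attempts denied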

Bioinformatics Computing Resources at Norris Medical Library

Computer Workstations in Norris Medical Library
- Hardware: four Dell Precision T3550/T3660 workstations with 6-core Intel Xeon processors, GB memory
- Software: Partek Flow, Galaxy, Cistrome, ANNOVAR, R & Bioconductor, CAP-miRSeq
- Total storage: 62 TB (May 2015)

Custom Computing Cluster in USC High-Performance Computing
- Hardware: one head node and 9 worker nodes*, each with dual 6/8-core AMD Opteron 6176 or Intel Xeon E5-2650 processors, GB memory
- Software: Partek Flow, Galaxy, Cistrome, R & Bioconductor
- Total storage: 4 TB

[Diagram: the user logs on to the HPC, transfers files between the local machine and the HPC file system, and submits jobs through the HPC-NML head node to the HPC-NML custom condo]
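For readers new to the workflow in the diagram, the sequence is: log on to the head node, stage data on the HPC file system, then submit batch jobs that run on the HPC-NML condo. Below is a minimal Python sketch of the submission step, assuming a PBS/Torque-style scheduler with qsub; the job name, resource request, and script contents are illustrative, not taken from the slide:

# Minimal sketch: write a PBS-style job script and submit it with qsub.
# Assumes a PBS/Torque scheduler; directives and limits are illustrative.
import subprocess
from pathlib import Path

job_script = """\
#!/bin/bash
#PBS -N ngs_analysis
#PBS -l nodes=1:ppn=6
#PBS -l walltime=24:00:00
cd "$PBS_O_WORKDIR"
# ... run an aligner or an R/Bioconductor script here ...
"""

script = Path("ngs_analysis.pbs")
script.write_text(job_script)
subprocess.run(["qsub", str(script)], check=True)  # prints the new job ID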

2014 Next Generation Sequencing (NGS) Data Analysis at NML Bioinformatics Service

Galaxy
From        To           # Users   # Projects   # Data Files   Total Raw File Size (GB)   Estimated Run Time (day)
1/1/2014    12/31/2014

Partek Flow
From        To           # Users   # Projects   # Data Files   Total Raw File Size (GB)   Total Run Time (day)
1/1/2014    12/31/2014

NGS Growth in Biomedical Research

Computing and Storage Growth at NML Bioinformatics Service