GRAPE Status

Hardware Configuration

Master
– 2 x Dual-Core Xeon 5130 (2 GHz)
– 2 GB memory
– 500 GB hard disk

4 Slaves
– 2 x Dual-Core Xeon 5130 (2 GHz)
– 1 GB memory
– 80 GB hard disk

GRAPE6-BLX Boards
– Each has a particle memory size of 260K.
– Each has a peak performance of ~130 Gflops.

Gigabit switching hub & UPS
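For reference, a machinefile telling MPICH which hosts to start processes on might look like the sketch below. The hostnames and the four-slots-per-node counts are assumptions based on the node list above, not the cluster's actual configuration.

    # Hypothetical MPICH machinefile: one master and four slaves,
    # each with 2 x dual-core Xeon 5130 (4 cores per node).
    master:4
    slave1:4
    slave2:4
    slave3:4
    slave4:4

A run could then be launched with something like mpirun -np 20 -machinefile machines ./nbody (the exact option names depend on the MPICH version in use).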

Software Configuration

Cluster Environment
– Red Hat Enterprise Linux WS
– MPICH, MPICH2
– PBS queue service (a sample job script is sketched below)
– NFS (Network File System)
– NIS (Network Information Service) in preparation

Almost no security measures
– Frequent backups are recommended.
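A minimal PBS job script for this kind of MPICH-over-NFS setup might look like the sketch below; the job name, node and core counts, and executable name are placeholders, and the exact mpirun options depend on the MPICH version.

    #!/bin/sh
    # Hypothetical PBS job script: request the 4 slave nodes (4 cores each)
    # and launch an MPI executable from the NFS-shared working directory.
    #PBS -N grape_nbody
    #PBS -l nodes=4:ppn=4
    #PBS -j oe
    cd $PBS_O_WORKDIR
    mpirun -np 16 -machinefile $PBS_NODEFILE ./nbody_treecode input.dat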

Speed Test

Timing table (columns: Particles / Processors / Wallclock Time in s)

– Compound galaxy model (halo, disk & bulge)
– Makino's parallel tree code for GRAPE
– Integration over ~1 Gyr
– Error in energy less than 0.1%
– Almost no difference when using a 100 Mbps hub
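The "error in energy less than 0.1%" figure is the usual relative drift of the total energy over the run. A minimal, self-contained sketch of that check in C is shown below; the direct O(N^2) potential sum and the toy two-body data are illustrative only (in a real run the tree code and the GRAPE boards supply the forces and potentials).

    /* Relative energy-conservation check for an N-body run (G = 1 units). */
    #include <math.h>
    #include <stdio.h>

    /* Total energy = kinetic + potential, by direct pairwise summation. */
    static double total_energy(int n, const double m[], const double x[][3],
                               const double v[][3])
    {
        double ekin = 0.0, epot = 0.0;
        for (int i = 0; i < n; i++) {
            ekin += 0.5 * m[i] * (v[i][0]*v[i][0] + v[i][1]*v[i][1] + v[i][2]*v[i][2]);
            for (int j = i + 1; j < n; j++) {
                double dx = x[i][0] - x[j][0];
                double dy = x[i][1] - x[j][1];
                double dz = x[i][2] - x[j][2];
                epot -= m[i] * m[j] / sqrt(dx*dx + dy*dy + dz*dz);
            }
        }
        return ekin + epot;
    }

    int main(void)
    {
        /* Toy two-body data, just to exercise the check. */
        double m[2]    = { 1.0, 1.0 };
        double x[2][3] = { { -0.5, 0.0, 0.0 }, { 0.5, 0.0, 0.0 } };
        double v[2][3] = { { 0.0, -0.5, 0.0 }, { 0.0, 0.5, 0.0 } };

        double E0 = total_energy(2, m, x, v);  /* energy at t = 0 */
        double E  = E0;                        /* recomputed after ~1 Gyr in a real run */
        double rel_err = fabs((E - E0) / E0);
        printf("relative energy error = %.3e (acceptance limit 1.0e-3)\n", rel_err);
        return 0;
    }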