
OAC WP4: OAC BeoWulf
OAC ASTROWISE TEAM, Leiden, November 20-21, 2001

Slide 1
Hardware choice criteria:
- limited physical space
- flexible configuration
- ability to cope with the technology transition for CPUs

Slide 2
The BeoWulf arrived in Napoli on November 14th.

Configuration:
- 1 master
- 8 slave nodes
- 1 switch

Slave node configuration:
- cabinet: 2U
- MB: Tyan TIGER 200T, dual processor
- CPU: PIII 1 GHz (the dual-processor board can mount two PIIIs)
- RAM: 512 MB ECC registered
- HD: 40 GB IBM IDE
- Network: 2 100Base-T modules
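
As a minimal illustration of this 1-master / 8-slave layout, the sketch below enumerates the nodes and pings each one; the host names (master, node01..node08) are hypothetical, since the slides do not give the real ones:

```python
# Minimal sketch of the layout on this slide: 1 master plus 8 slave
# nodes behind the 100Base-T switch. Host names are hypothetical.
import subprocess

MASTER = "master"
SLAVES = [f"node{i:02d}" for i in range(1, 9)]  # node01 .. node08

def is_up(host: str) -> bool:
    """Ping a host once (1 s timeout); True if it answers."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "1", host],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

if __name__ == "__main__":
    for host in [MASTER] + SLAVES:
        print(f"{host}: {'up' if is_up(host) else 'DOWN'}")
```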

Slide 3
Master configuration:
- cabinet: 4U
- MB: TYAN Thunder HE SL, dual processor
- CPU: two PIII 1 GHz
- RAM: 1 GB ECC registered
- HD: 2 x 75 GB + 1 x 18 GB SCSI
- Gigabit Ethernet module

Switch: Allied Telesyn, 24 100Base-T ports + 1 Gigabit Ethernet port

Total cost: 14 kEUR + tax

Slide 4
Software:
- OS: Linux 7.2
- Administration: OSCAR 1.1
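
OSCAR-managed Beowulf farms of this era were commonly used for job-level parallelism: independent reduction jobs fanned out to the nodes, one per machine. A minimal sketch of that pattern, assuming hypothetical node names and a placeholder reduce_ccd command (neither appears in the slides):

```python
# Hedged sketch: fan independent per-CCD jobs out over ssh, one CCD
# extension per slave node. Node names and the "reduce_ccd" command
# are hypothetical placeholders, not taken from the slides.
import subprocess
from concurrent.futures import ThreadPoolExecutor

NODES = [f"node{i:02d}" for i in range(1, 9)]  # 8 slave nodes

def run_on(node: str, command: str) -> int:
    """Run one shell command on a remote node via ssh; return exit code."""
    return subprocess.run(["ssh", node, command]).returncode

# One CCD extension per node, all 8 in parallel.
jobs = [(node, f"reduce_ccd --ext {i}") for i, node in enumerate(NODES, 1)]
with ThreadPoolExecutor(max_workers=len(NODES)) as pool:
    exit_codes = list(pool.map(lambda job: run_on(*job), jobs))
print("exit codes:", exit_codes)
```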

Slide 5
Some results with the BeoWulf at ESO:
- Creating a master bias from 5 raw WFI images, using a specially designed eclipse recipe: 68 s
- Creating a master flat-field from 5 raw dome flats and 5 sky flats: 390 s
- Obtaining a catalogue of identified standard stars from a raw Landolt field (i.e. including astrometric calibration): 140 s
- The same for 1 CCD extension processed on a single node: 88 s
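
If one assumes the 140 s figure covers all 8 WFI CCD extensions processed in parallel, while 88 s is one extension on a single node (the slide does not state this explicitly), a back-of-envelope speedup estimate follows:

```python
# Back-of-envelope reading of the timings above, under the assumption
# (not stated on the slide) that 140 s covers all 8 WFI CCD extensions
# run in parallel and 88 s is one extension on a single node.
n_ccds = 8
t_parallel = 140.0              # s, full mosaic on the cluster
t_one_ccd = 88.0                # s, one CCD on one node
t_serial = n_ccds * t_one_ccd   # ~704 s if processed sequentially
speedup = t_serial / t_parallel           # ~5.0x
efficiency = speedup / n_ccds             # ~0.63
print(f"speedup ~{speedup:.1f}x, efficiency ~{efficiency:.0%}")
```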