AMS02 Software and Hardware Evaluation. A. Eline.


AMS02 Software and Hardware Evaluation. A. Eline

Outline  AMS SOC  AMS POC  AMS Gateway Computer  AMS Servers  AMS Production Nodes  AMS Backup Solution  AMS02 Benchmarks  AMS Computing Requirements for 2008  Summary

AMS SOC configuration  AMS Gateway Computer, HTTP and AFS server (1)  AMS Database and Production Server (1)  AMS AFS/Database Backup Server (1)  File Servers (2)  Production Nodes (6/8)

AMS POC configuration  Data Relay computers for receiving, buffering, archiving, and sending online data to the remote centers.

AMS Gateway
 Dell PowerEdge 6800 server:
- 4x2 Xeon 3.0 GHz processors, hyperthreading (16 virtual processors)
- 2 TB disk space, RAID 5, SCSI
- 12 GB RAM
- MB/sec R/W access to disks
- CHF
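RAID 5 trades one disk's worth of capacity for parity, which is why the usable space on the gateway is below the raw total. A minimal sketch of the capacity rule; the gateway's actual disk count and per-disk size are not stated on the slide, so the 8 x 300 GB layout used here is only an illustrative assumption:

```python
# RAID 5 keeps one disk's worth of parity spread across the array,
# so usable capacity is (n - 1) x disk_size.
def raid5_usable_gb(n_disks: int, disk_gb: int) -> int:
    assert n_disks >= 3, "RAID 5 needs at least 3 disks"
    return (n_disks - 1) * disk_gb

# Hypothetical layout (not from the slide): 8 disks of 300 GB.
print(raid5_usable_gb(8, 300))
```

The same function applies to any of the RAID 5 volumes mentioned in the deck once the disk count is known.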

AMS File/AFS/DB/Production Servers
 Dell PowerEdge 2900 server:
- 2xQuad Core Xeon E5345 2.33 GHz processors, 1333 MHz FSB
- TB disk space
- 4 GB RAM
- MB/sec R/W access to disks
- CHF

2 candidates for AMS Production Nodes
 1st: Dell SC1420
- 2xQuad Core Xeon E5320 1.86 GHz processors, 1066 MHz FSB
- TB disk space
- 2 GB RAM
- R/W access: performance problem (25 MB/s R/W access, under investigation)
- CHF
 2nd: Dell PE1900
- equal configuration
- CHF

Backup Solution
 LaCie FireWire 800 2 TB external disks
 Price: 0.6 CHF/MB
 Dual-host capability
 R/W access (file size >= 6 GB): read 95 MB/s, write 75 MB/s
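The measured write rate puts a floor on how long a full backup of one disk takes. A quick estimate from the figures above (2 TB capacity, 75 MB/s sequential write for large files; decimal units assumed):

```python
# Time to fill one 2 TB backup disk at the measured sequential
# write rate of 75 MB/s (large-file case from the slide).
capacity_mb = 2_000_000        # 2 TB in MB, decimal units assumed
write_rate_mb_s = 75           # measured write rate, files >= 6 GB

seconds = capacity_mb / write_rate_mb_s
hours = seconds / 3600
print(f"Full 2 TB backup at 75 MB/s: {hours:.1f} hours")
```

This works out to roughly 7-8 hours per disk, i.e. an overnight window per 2 TB volume.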

AMS02 Benchmarks

AMS Computing Requirements for 2008
 2x File Servers: Dell PE2900, 2xQuad Core Xeon 2.66 GHz processors, 8x1000 GB disks
 (6/8)x Production Nodes: 2xQuad Core Xeon 1.86 GHz processors, 4x750 GB disks
 2x Disk Arrays: 2x15 TB
 2x Data Relay computers: Dell PE2900, Quad Core Xeon 2.33 GHz processor, 4x750 GB disks
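Summing the raw disk capacity in the request above gives a rough sizing check. A sketch; counts follow the slide, taking the lower bound of 6 production nodes, decimal units, and no RAID or filesystem overhead:

```python
# Raw disk capacity of the requested 2008 hardware (lower bound,
# 6 production nodes); per-machine figures taken from the slide.
file_servers = 2 * 8 * 1000    # GB: 2x PE2900, 8x1000 GB disks each
prod_nodes   = 6 * 4 * 750     # GB: 6 nodes, 4x750 GB disks each
disk_arrays  = 2 * 15_000      # GB: 2x 15 TB arrays
data_relay   = 2 * 4 * 750     # GB: 2x PE2900, 4x750 GB disks each

total_gb = file_servers + prod_nodes + disk_arrays + data_relay
print(f"Total raw capacity: {total_gb / 1000:.0f} TB")
```

With 8 production nodes instead of 6 the total rises by another 6 TB.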

Summary  Most of the hardware has been identified.  Still pending: 1. Evaluation of large disk arrays, ~10 TB (Dell/Apple?), expected July this year. 2. Production nodes (SC1420/PE1900), expected May this year.  Thanks to the new quad-core processors, no problems are foreseen concerning offline reconstruction software performance.