Working Group Meeting (McGrew/Toki), Discussion Items (focused on R&D software section)

Discussion Items (focused on R&D software section)
- Goals of simulations
- Simulation detector models to try
- Simulation CPU requirements & data file size
- Software modules
- Software infrastructure
- Computing hardware
- Manpower
- Proposed costs for R&D proposal

Presentations:
- Jungdoo: UNO software at CSU
- Chiaki: Physics analysis, finding & CP sensitivity
- Clark: UNO software, the analysis chain

SATURDAY meeting

Friday Afternoon Discussion
- Participants: Clark, Brett, Dhiwan, Aidong, Walter (Chiaki, Jungdoo)
- Biweekly 1-hour phone meetings
- Develop/implement software using descriptions of the analysis chain from the experts
- Focus on serious analysis of the VLBL measurement with the UNO baseline detector and the BNL beam
- Develop a schedule within a few weeks
- Plan for progress reports at future UNO meetings
- Target VLBL results for a PRD paper in ~1 year
=> Seeking interested UNO collaborators

Main Goals of Simulation Effort
- Determine the sensitivity of the UNO detector to:
  - Proton decay in p → π⁰e⁺ and p → ν̄K⁺
  - Very long baseline measurements of sin²2θ13 and δCP using νμ disappearance and νe appearance (VLBL à la BNL), sketched below
- R&D results for the UNO detector proposal
- Second aim: a Phys. Rev. D paper on VLBL
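For orientation only, the leading vacuum term of the appearance probability that drives the sin²2θ13 measurement is approximately

\[
P(\nu_\mu \to \nu_e) \;\approx\; \sin^2 2\theta_{13}\,\sin^2\theta_{23}\,
\sin^2\!\left(\frac{\Delta m^2_{31}\,L}{4E}\right)
\;+\; (\text{sub-leading terms involving } \delta_{CP}),
\]

where δCP enters only through the sub-leading solar-atmospheric interference terms and matter effects modify the whole expression over the 2767 km baseline; the full three-flavor, matter-corrected probability is what the simulation would actually use.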

Detector Models to be Simulated
- Main baseline UNO (60x60x180 m, 20" & 8" PMTs, etc.)
  - Start with the baseline design and determine its sensitivity
  - Optimize sensitivity and minimize cost
- Parameters to vary:
  - Detector shape and dimensions
  - Number of PMTs
  - Size of PMTs
  - Reflectivity of material
  - Water absorption length
  - Water scattering length
  - Dead space between the inner and outer sections
  - Electronics (timing and ADC resolution)
  - Hadronic modelling & parametrizations?
- Each of the above parameters would require a separate simulation run (see the configuration sketch below)
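As an illustration of how such a one-parameter-at-a-time scan could be organized, the sketch below builds one simulation configuration per variation from a baseline description. All names and numerical values here are hypothetical placeholders, not part of any existing UNO software.

```python
# Minimal sketch (hypothetical names, placeholder values): build one
# simulation configuration per single-parameter variation of the baseline
# UNO detector model, so each variation maps to a separate simulation run.
import copy

baseline = {
    "tank_xyz_m": (60, 60, 180),   # baseline UNO dimensions
    "pmt_diameter_in": 20,         # 20" PMTs in the baseline
    "n_pmts": 56000,               # placeholder count
    "wall_reflectivity": 0.90,     # placeholder
    "absorption_length_m": 80.0,   # placeholder
    "scattering_length_m": 70.0,   # placeholder
    "adc_bits": 12,                # placeholder electronics resolution
}

variations = {
    "pmt_8in": {"pmt_diameter_in": 8},
    "half_pmts": {"n_pmts": 28000},
    "short_absorption": {"absorption_length_m": 60.0},
}

def make_scan_configs(baseline, variations):
    """Return {run_name: config} with exactly one parameter changed per run."""
    configs = {"baseline": copy.deepcopy(baseline)}
    for name, change in variations.items():
        cfg = copy.deepcopy(baseline)
        cfg.update(change)
        configs[name] = cfg
    return configs

if __name__ == "__main__":
    for run_name, cfg in make_scan_configs(baseline, variations).items():
        print(run_name, cfg["pmt_diameter_in"], cfg["n_pmts"])
```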

Other simulation parameters to be varied
- L0, the distance between UNO and BNL (2767 km)
- ρ, the earth density (~3 g/cm³)
- Beam parameters (energy distribution, backgrounds)
- Overburden (4200 mwe depth of UNO)
- The above parameters can be varied by reweighting the incident event sample (see the sketch below)

Other issues
- Detailed background studies (… backgrounds)
- How sensitivity is limited by background uncertainties
- Detector calibration issues (or limitations)
- Figure of merit (specifying the sensitivity of measurements): what type of statistics?
- Comparisons to competing experiments, etc.: off-axis?, NOvA?
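The reweighting mentioned above can be illustrated with a minimal sketch: each simulated event receives a weight equal to the ratio of the oscillation probability for the new parameter values to the probability used when the sample was generated. The two-flavor vacuum formula and all names here are assumptions for illustration only; a real implementation would use the full three-flavor probability with matter effects (which is where the earth density ρ enters) and would also reweight for beam-flux changes.

```python
# Hedged sketch: reweight an event sample generated at one baseline /
# oscillation-parameter point to mimic another, instead of re-simulating.
import math

def p_numu_to_nue(E_gev, L_km, sin2_2th13, dm2_ev2=2.5e-3):
    """Two-flavor vacuum approximation of the nu_mu -> nu_e appearance
    probability; delta_CP and matter effects are omitted in this sketch."""
    return sin2_2th13 * math.sin(1.267 * dm2_ev2 * L_km / E_gev) ** 2

def event_weight(E_gev, L_old_km, L_new_km, s2t13_old, s2t13_new):
    """Weight taking an event generated at (L_old, s2t13_old) to the
    hypothesis (L_new, s2t13_new)."""
    p_old = p_numu_to_nue(E_gev, L_old_km, s2t13_old)
    p_new = p_numu_to_nue(E_gev, L_new_km, s2t13_new)
    return p_new / p_old if p_old > 0.0 else 0.0

# Example: sample generated for the 2767 km BNL-to-UNO baseline,
# reweighted to a smaller (hypothetical) value of sin^2(2*theta_13).
w = event_weight(E_gev=1.5, L_old_km=2767.0, L_new_km=2767.0,
                 s2t13_old=0.10, s2t13_new=0.05)
print(w)  # 0.5, since the probability is linear in sin^2(2*theta_13) here
```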

MANPOWER
- Programmer/physics support person at the central site
  - HEP physics background; experienced C++ programmer; LINUX administrator
  - Support person accessible by offsite users
- Graduate student support
  - Students who have finished coursework but not yet joined their thesis experiment are ideal candidates
  - 12-month support for UNO software development
- Hourly consultant support for LINUX hardware and setup
  - Some success at CSU with hourly support ($75/hour)
  - May be possible to share among multiple sites (remote login)
- Guidance from senior physicists (no cost)
  - Help direct & guide students at local sites

Equipment Costs*: example of hardware costs
- COMPUTE CLIENT: AMD Athlon 64, 2.0 GHz, 512 KB cache, Abit motherboard, 1 GB DDR PC3200 Crucial memory, 200 GB hard disk, case and power supply; $710
- DISK SERVER (2.4 TB RAID5): 3Ware Escalade 12-channel RAID5 card, dual Athlon 2.8 GHz, Tyan motherboard, 2 GB DDR registered Crucial memory, 13 x 200 GB 7200 rpm EIDE drives, case and power supply; $4000
- NETWORK SWITCH: Dell PowerConnect Gigabit, 24 port; $350

*Prices from October 2004, www.pricewatch.com
=> Need more discussions & guidance from Brett

PROPOSED COSTS for R&D Proposal
- Linux/programmer/physics expert: $70K
- Cost per 12-month graduate student: $27K (with overhead)
- Cost of hourly hardware consultant support, 20 hours/month for 12 months: $18K
- Cost per farm site (20 CPUs, 2.5 TB & network): $20K (see the cross-check below)
- Fully loaded LINUX desktops for code development: ~$1K

A five-page first draft of the software R&D section is written in LaTeX; it needs several iterations and consistency checks with the other sections of the R&D proposal.
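As a rough arithmetic cross-check against the October 2004 prices quoted above (an illustration, not part of the proposal text), the consultant item works out exactly and the farm-site hardware comes to about $18.5K; reading the $20K figure as leaving margin for spares or infrastructure is an assumption here.

```python
# Cross-check of two R&D cost line items using the October 2004 prices
# quoted in the hardware slide (illustration only).

consultant_rate = 75           # $/hour, from the manpower slide
consultant_hours = 20 * 12     # 20 hours/month for 12 months
print("Consultant support: $", consultant_rate * consultant_hours)  # 18000 -> $18K

compute_client = 710           # $ per AMD Athlon 64 compute client
disk_server = 4000             # $ per 2.4 TB RAID5 disk server
network_switch = 350           # $ per 24-port gigabit switch
farm_site = 20 * compute_client + disk_server + network_switch
print("Farm-site hardware: $", farm_site)  # 18550, quoted as ~$20K per site
```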