TechFair ’05, University of Texas at Arlington, November 16, 2005

What’s the Point? High Energy Particle Physics is a study of the smallest pieces of matter. It investigates (among other things) the nature of the universe immediately after the Big Bang. It also explores physics at temperatures not common for the past 15 billion years (or so). It’s a lot of fun.

Particle Colliders Provide HEP Data Colliders form two counter-circulating beams of subatomic particles. Particles are accelerated to nearly the speed of light. Beams are forced to intersect at detector facilities. Detectors inspect and record the outcome of particle collisions. The Tevatron is currently the most powerful collider: 2 TeV (2 trillion degrees) of collision energy between proton and anti-proton beams. The Large Hadron Collider (LHC) will become the most powerful in 2007: 14 TeV (14 trillion degrees) of collision energy between two proton beams. (Slide image: the LHC at CERN in Geneva, Switzerland, with ATLAS and Mont Blanc labeled; P P marks the two proton beams.)

Detector experiments are large collaborations. DØ Collaboration: 650 scientists, 78 institutions, 18 countries. ATLAS Collaboration: 1,700 scientists, 150 institutions, 34 countries.

Detector Construction A recorded collision is a snapshot from sensors within the detector Detectors are arranged in layers around the collision point –Each layer is sensitive to a different physical process –Sensors are arranged spatially within each layer –Sensor outputs are electrical signals
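
To make the layered readout concrete, here is a minimal sketch of how one recorded collision ("event") could be represented as raw sensor signals grouped by detector layer. The layer names, channel numbers, and classes are invented for illustration and are not taken from the actual DØ or ATLAS software.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class LayerReadout:
    """Raw electrical signals from one detector layer (illustrative only)."""
    layer_name: str                       # e.g. "tracker", "calorimeter" (hypothetical labels)
    channel_signals: Dict[int, float] = field(default_factory=dict)  # channel id -> pulse height

@dataclass
class RawEvent:
    """Snapshot of every sensor at the moment of one collision."""
    event_number: int
    layers: List[LayerReadout] = field(default_factory=list)

# A toy event: each layer reports signals only on the channels a particle crossed.
event = RawEvent(
    event_number=1,
    layers=[
        LayerReadout("tracker",     {1042: 0.8, 1043: 0.7}),
        LayerReadout("calorimeter", {207: 35.2}),
        LayerReadout("muon_system", {}),
    ],
)
```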

DØ Detector (Tevatron): weighs 5,000 tons; can inspect 3,000,000 collisions/second; will record 50 collisions/second; records approximately 10,000,000 bytes/second; will record 4×10^15 bytes (4 petabytes) of data in the current run. ATLAS Detector (LHC): weighs 10,000 tons; can inspect 1,000,000,000 collisions/second; will record 100 collisions/second; records approximately 300,000,000 bytes/second; will record 1.5×10^15 bytes (1,500,000,000,000,000 bytes, or 1.5 petabytes) each year.
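
The petabyte-scale totals follow directly from the recorded data rates. The short back-of-envelope calculation below uses the rates quoted above; the "seconds of running per year" figure is an assumption (roughly 10^7 seconds, a common rule of thumb for collider uptime), not a number from the slide.

```python
# Back-of-envelope scale check for the recording rates quoted above.
# Assumption (not from the slide): ~1e7 seconds of data taking per year.
SECONDS_PER_YEAR = 1.0e7

def yearly_volume_petabytes(bytes_per_second: float) -> float:
    """Data volume written in one year at a sustained recording rate, in PB."""
    return bytes_per_second * SECONDS_PER_YEAR / 1e15

print(f"DØ    (~10 MB/s):  {yearly_volume_petabytes(10_000_000):.2f} PB/year")
print(f"ATLAS (~300 MB/s): {yearly_volume_petabytes(300_000_000):.2f} PB/year")
# Output: 0.10 PB/year and 3.00 PB/year -- the same petabyte scale as the
# totals quoted on the slide (exact totals depend on duty cycle and on which
# processing stages of the data are counted).
```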

How are computers used in HEP? Computers inspect collisions and store interesting raw data to tape; triggers control the filtering. Raw data is converted to physics data through the reconstruction process, which converts electronic signals into physics objects. Analysis is performed on the reconstructed data: searching for new phenomena, measuring and verifying known phenomena. Monte-Carlo simulations are performed to generate pseudo-raw data (MC data), which serves as a guide for analysis.
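
As an illustration of that processing chain (trigger, then reconstruction, then analysis, with Monte-Carlo data feeding in alongside real data), here is a minimal sketch. The function names, selection criteria, and toy "jet" quantities are invented for illustration; they are not the actual DØ or ATLAS software.

```python
import random

def trigger(event: dict) -> bool:
    """Keep only 'interesting' collisions (toy criterion: enough total energy)."""
    return event["total_energy"] > 50.0

def reconstruct(raw_event: dict) -> dict:
    """Convert electronic signals into physics objects (here: a toy jet count)."""
    return {"n_jets": sum(1 for s in raw_event["signals"] if s > 10.0)}

def analyze(physics_events: list) -> float:
    """A toy 'measurement': average jet multiplicity in the selected sample."""
    return sum(e["n_jets"] for e in physics_events) / max(len(physics_events), 1)

# Toy raw data (a real experiment reads this from the detector or from MC simulation).
raw = [{"total_energy": random.uniform(0, 100),
        "signals": [random.uniform(0, 20) for _ in range(5)]} for _ in range(1000)]

selected      = [ev for ev in raw if trigger(ev)]      # online filtering
reconstructed = [reconstruct(ev) for ev in selected]   # signals -> physics objects
print("mean jets per selected event:", analyze(reconstructed))
```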

What is a Computing Grid? Grid: geographically distributed computing resources configured for coordinated use. Physical resources & networks provide raw capability; "middleware" software ties it together.

How HEP Uses the Grid HEP experiments are getting larger –Much more data –Larger collaborations in terms of personnel and countries –Problematic to manage all computing needs at a central facility –Solution: distributed access to data/processors Grids provide access to large amounts of computing resources –Experiment-funded resources (Tier1, Tier2 facilities) –Underutilized resources at other experiments’ facilities –University and possibly National Laboratory facilities Grid middleware is key to using these resources –Provides uniform methods for data movement, job execution, monitoring –Provides single sign-on for access to resources
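
To give a flavor of what "uniform methods for job execution and data movement" means in practice, here is a minimal Python sketch that wraps two command-line tools from the Globus Toolkit middleware of that era: globus-job-run for remote execution and globus-url-copy for GridFTP transfers. The host names and file paths are placeholders, and the wrapper itself is illustrative rather than part of any experiment's actual software.

```python
import subprocess

def run_remote(gatekeeper: str, executable: str, *args: str) -> str:
    """Run a program at a remote grid site through its GT2 gatekeeper.
    Single sign-on comes from the user's grid proxy certificate,
    created beforehand with grid-proxy-init."""
    cmd = ["globus-job-run", gatekeeper, executable, *args]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

def copy_file(src_url: str, dst_url: str) -> None:
    """Move data between sites with GridFTP; the same call works for any pair of endpoints."""
    subprocess.run(["globus-url-copy", src_url, dst_url], check=True)

# Placeholder endpoints -- not real UTA or SWT2 hosts.
print(run_remote("gatekeeper.swt2.example.edu/jobmanager-pbs", "/bin/date"))
copy_file("gsiftp://se.swt2.example.edu/data/mc/events_001.root",
          "file:///scratch/events_001.root")
```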

UTA HEP’s Grid Activities DØ –SAM Grid –Remote MC Production –Student Internships at Fermilab –Data Reprocessing –Offsite Analysis ATLAS –Prototype Tier2 Facility using DPCC –MC Production –Service Challenges –Southwestern Tier2 Facility with OU and Langston Software Development –ATLAS MC Production System Grats/Windmill/PANDA –ATLAS Distributed Analysis System (DIAL) –DØ MC Production System McFARM –Monitoring Systems (SAMGrid, McPerm, McGraph) Founding Member of Distributed Organization for Scientific and Academic Research (DOSAR)

UTA HEP’s Grid Resources Distributed and Parallel Computing Cluster (DPCC) 80-node Linux compute cluster –Dual Pentium Xeon nodes, 2.4 or 2.6 GHz –2 GB RAM per node –45 TB of network storage 3 IBM P5-570 machines –8-way 1.5 GHz P5 processors –32 GB RAM –5 TB of SAN storage Funded through an NSF-MRI grant between CSE / HEP / UT-SW Commissioned 9/03 Resource provider for the DØ and ATLAS experiments –SAMGrid –Grid3 project –Open Science Grid (OSG)
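
A quick tally of the hardware listed above gives the aggregate DPCC capacity; this sketch assumes the 80 dual-CPU nodes and the three 8-way IBM machines are the only compute resources counted.

```python
# Aggregate capacity of the DPCC resources listed above.
xeon_nodes, cpus_per_xeon_node, ram_per_xeon_gb = 80, 2, 2
ibm_machines, cpus_per_ibm, ram_per_ibm_gb = 3, 8, 32

total_cpus = xeon_nodes * cpus_per_xeon_node + ibm_machines * cpus_per_ibm
total_ram_gb = xeon_nodes * ram_per_xeon_gb + ibm_machines * ram_per_ibm_gb
total_storage_tb = 45 + 5   # network storage + SAN storage

print(f"{total_cpus} CPUs, {total_ram_gb} GB RAM, {total_storage_tb} TB disk")
# -> 184 CPUs, 256 GB RAM, 50 TB disk
```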

UTA DØ Grid Processing MC Production 8/03 – 9/04 (Figure: cumulative number of Monte-Carlo events produced since August 2003 for the DØ collaboration, by remote site.) Additional DØ Activities: P14 FixTMB processing, 50 million events, 1.5 TB processed; P17 reprocessing, 30 million events, 6 TB reprocessed.

UTA ATLAS Grid Processing (Figure 1: percentage contribution toward US-ATLAS DC2 production by computing site. Figure 2: percentage contribution toward US-ATLAS Rome production by computing site.)

UTA hosts the first and only US DØ RAC (Regional Analysis Center). DOSAR formed around UTA and is now a VO in the Open Science Grid. (Map of DOSAR sites: Mexico/Brazil, OU/LU, UAZ, Rice, LTU, UTA, KU, KSU, Ole Miss.)

PANDA Panda is the next generation data production system for US-ATLAS –Schedules computational jobs to participating grid sites –Manages data placement and delivery of results –Performs bookkeeping to track status of requests UTA responsibilities: –Project Lead –Database schema design –Testing and Integration –Packaging
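
A minimal sketch of the three responsibilities listed above (scheduling, data placement, and bookkeeping) is shown below. The class names, site names, and the simple brokerage rule are invented for illustration; this is not the real PanDA code or its database schema.

```python
from dataclasses import dataclass, field

@dataclass
class Site:
    name: str
    free_cpus: int
    datasets: set = field(default_factory=set)   # datasets already resident at the site

@dataclass
class JobRequest:
    job_id: int
    input_dataset: str
    status: str = "defined"                      # bookkeeping: defined -> assigned -> done

def assign(job: JobRequest, sites: list) -> Site:
    """Scheduling + data placement in one toy rule: prefer a free site that already
    holds the input data; otherwise stage the data to the site with the most free CPUs."""
    with_data = [s for s in sites if job.input_dataset in s.datasets and s.free_cpus > 0]
    target = max(with_data or sites, key=lambda s: s.free_cpus)
    if job.input_dataset not in target.datasets:
        target.datasets.add(job.input_dataset)   # stands in for a real data transfer
    target.free_cpus -= 1
    job.status = "assigned"                      # bookkeeping update
    return target

sites = [Site("UTA_SWT2", 120, {"mc.higgs.v1"}), Site("BNL_Tier1", 40)]
job = JobRequest(1, "mc.higgs.v1")
print(assign(job, sites).name, job.status)       # -> UTA_SWT2 assigned
```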

Other Grid Efforts UTA is a founding member of the Distributed Organization for Scientific and Academic Research (DOSAR) –DOSAR is researching workflows and methodologies for performing analysis of HEP data in a distributed manner –DOSAR is a recognized Virtual Organization within the Open Science Grid initiative UTA is collaborating on the DIAL (Distributed Interactive Analysis for Large datasets) project at BNL. UTA has joined THEGrid (Texas High Energy Grid) to share resources among Texas institutions for furthering HEP work. UTA HEP has sponsored CSE student internships at Fermilab, with exposure to grid software development.

UTA Monitoring Applications Developed, implemented and improved by UTA students. Commissioned and being deployed. (Screenshots: "Anticipated CPU Occupation" and "Jobs in Distribute Queue" plots, showing number of jobs and percentage of total available CPUs versus time from present, in hours.)

ATLAS Southwestern Tier2 Facility US-ATLAS uses Brookhaven National Laboratory (BNL) as its national Tier1 center Three Tier2 centers were funded in 2005: –Southwestern Tier2 (UTA, University of Oklahoma, and Langston University) –Northeastern Tier2 (Boston University and Harvard) –Midwestern Tier2 (University of Chicago and Indiana University) Tier2 funding is expected for each center for the duration of the ATLAS experiment (20+ years) UTA’s Kaushik De is Principal Investigator for the Tier2 center Initial hardware is expected to be in place and operational by December 2005 at UTA and OU

ATLAS Southwestern Tier2 Facility UTA’s portion of SWT2 is expected to be on the order of 50 – 100 times the scale of DPCC –Processors –Thousands of TB (a few petabytes) of storage Challenges for the SWT2 center –Network bandwidth needs: recent predictions show a need for 1 Gbps bandwidth between the Tier1 and Tier2 by 2008 –Coordination of distributed resources: providing a unified view of SWT2 resources to outside collaborators; management of distributed resources between OU and UTA –Management of resources on campus: SWT2 resources are likely to be split between UTACC and the new Physics Building
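
For a sense of why roughly 1 Gbps is needed, the short calculation below estimates how long it takes to move one petabyte at that rate. The one-petabyte figure and the assumption of perfect link utilization are simplifications for illustration, not numbers from the study the slide refers to.

```python
# How long does it take to move 1 PB over a 1 Gbps link at full utilization?
link_gbps = 1.0
data_petabytes = 1.0

bytes_total = data_petabytes * 1e15
bytes_per_second = link_gbps * 1e9 / 8      # 1 Gbps = 125 MB/s
days = bytes_total / bytes_per_second / 86_400

print(f"{days:.0f} days")   # -> ~93 days: even a dedicated 1 Gbps link needs about
                            # three months per petabyte, so multi-petabyte
                            # Tier1 <-> Tier2 traffic requires that bandwidth sustained.
```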