Status of Grid & RPC Test Stand DAQ (PU). Sumit Saluja, Programmer, EHEP Group, Dept. of Physics, Panjab University, Chandigarh.

Present Status of Tier 3 Grid: We have set up a Tier 3 grid computing cluster based on Sun Server X4140 machines. The cluster has one head node and three compute nodes, with 32 cores and 8 GB of memory per node. CMS-related software is installed on the cluster. Our servers currently provide 3 TB of disk space, and we plan to buy 12 TB more.
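
As a rough illustration of the node specification quoted above (32 cores and 8 GB of memory per compute node), a small Python check like the one below could be run from the head node. The compute-node hostnames are assumed here, since the slides do not list them; the check simply uses ssh, nproc and /proc/meminfo.

  #!/usr/bin/env python
  # Sketch: verify that each compute node reports the expected core
  # count and memory.  The hostnames are assumed (not given in the
  # slides) and are placeholders only.
  import subprocess

  NODES = ["compute-0", "compute-1", "compute-2"]   # assumed hostnames

  def remote(host, cmd):
      # Run a command on a node over ssh and return its output.
      return subprocess.check_output(["ssh", host, cmd]).decode().strip()

  for host in NODES:
      cores = int(remote(host, "nproc"))
      mem_kb = int(remote(host, "awk '/MemTotal/ {print $2}' /proc/meminfo"))
      print("%s: %d cores, %.1f GB RAM" % (host, cores, mem_kb / 1024.0 / 1024.0))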

Present Status of Tier 3 Grid (contd.): An NKN 1 Gbps fiber link has been laid and will very soon be operational. 20 systems are attached to the cluster, and batch jobs are running on it.

Batch Job Submission
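
The slides only state that batch jobs run on the cluster and do not name the batch system, so the following is a minimal sketch assuming a PBS/Torque-style scheduler (qsub). The job name, resource request and cmsRun configuration file are placeholders, not the actual PU setup.

  #!/usr/bin/env python
  # Sketch: submit a CMS batch job from the head node, assuming a
  # PBS/Torque-style scheduler.  All names below are illustrative.
  import subprocess
  import tempfile

  JOB_TEMPLATE = """#!/bin/bash
  #PBS -N cms_sim_test
  #PBS -l nodes=1:ppn=8
  #PBS -l walltime=04:00:00
  cd $PBS_O_WORKDIR
  # cmsRun comes from the CMS software stack installed on the cluster;
  # the configuration file name is a placeholder.
  cmsRun my_analysis_cfg.py
  """

  def submit():
      # Write the job script to a temporary file and hand it to qsub.
      with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as f:
          f.write(JOB_TEMPLATE)
          script = f.name
      job_id = subprocess.check_output(["qsub", script]).decode().strip()
      print("submitted job:", job_id)

  if __name__ == "__main__":
      submit()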

Tier 3 Grid

DAQ Software for Ethernet-Based CAMAC: We are replacing the existing PC-interfaced CAMAC with an Ethernet-based CAMAC controller. The Ethernet-based DAQ software is in the development phase: the scaler module is developed and in operation, while the ADC and TDC modules are still under development and will be operational soon. The first version of the DAQ software is command-line based; the second version will be a purely graphical user interface (GUI). Version 1 will be ready in the first week of September.
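
The controller's actual command set and network parameters are not given in the slides, so the command-line readout sketch below uses a hypothetical ASCII protocol, IP address and port, purely to illustrate how the scaler module might be polled over Ethernet.

  #!/usr/bin/env python
  # Sketch: poll one scaler channel of an Ethernet-based CAMAC
  # controller over TCP.  The address, port and "READ <station>
  # <subaddress> <function>" text protocol are hypothetical
  # placeholders, not the actual PU DAQ protocol.
  import socket
  import time

  CONTROLLER_ADDR = ("192.168.1.50", 2000)   # placeholder IP/port

  def read_scaler(sock, station, subaddr):
      # Send a read request (CAMAC read function F0) and return the count.
      cmd = "READ %d %d 0\n" % (station, subaddr)
      sock.sendall(cmd.encode())
      reply = sock.recv(64).decode().strip()
      return int(reply)

  def main():
      with socket.create_connection(CONTROLLER_ADDR, timeout=2.0) as sock:
          # Read channel 0 of the scaler in station 5 once per second.
          for _ in range(10):
              count = read_scaler(sock, station=5, subaddr=0)
              print("scaler ch0:", count)
              time.sleep(1.0)

  if __name__ == "__main__":
      main()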

DAQ Software for Ethernet-Based CAMAC: Version 1, Scaler Module

Future Plans. Grid plans: buy more compute nodes for the Tier 3 center; set up a video conferencing facility in our Tier 3 lab. DAQ plans: proceed with the replacement of CAMAC by VME (an order has been placed for VME, TDC, scaler modules, etc.); develop the VME DAQ software.

Thanks. EHEP Group, Department of Physics, Panjab University, Chandigarh.