Status of the US-CMS “Core Applications Software” Project Ian Fisk UCSD Acting Deputy Level 2 Project Manager US-CMS FNAL Oversight Panel October 24, 2000.

Ian Fisk, UCSD, Core Applications Software

Introduction to CAS: Overview

CAS currently consists of four sub-projects in the WBS:
- 2.1 CMS Software Architecture
- 2.2 Interactive User Analysis (IGUANA)
- 2.3 Distributed Data Management and Processing
- 2.4 Support

CMS Architecture 2.1

It is the intention of CMS to deploy a coherent architecture for all aspects of physics data processing, including simulation, higher-level triggering, reconstruction and selection, physics analysis, and visualization. The software architecture project seeks to facilitate this by ensuring that the problems we are solving are well researched and that the core on which the software will be built is well engineered and understood by the community.

CAFE
- Architecture Document Publishing Tools
- Domain Analysis
- Document Project Management
- Architectural Views Documentation
- Design Evolution

Overall Architecture and Framework

Subsystem Architecture
- Detector Geometry Description
- Simulation Sub-Architecture
- Reconstruction Sub-Architecture

CMS Architecture: CAFE

The CAFE project was formed to provide a forum for discussing the CMS architecture and taking input from the Collaboration on issues of design.
- CAFE covers the documentation side of the architecture: it provides tools for the documentation project, analyzes the problem domain, manages the documentation, documents the different aspects of the architecture, and feeds input back into the architecture to evolve it. CAFE has been developing the infrastructure necessary to produce the full architectural documentation, researching to develop an understanding of the problems we are solving, and gathering input from the Collaboration to propagate new ideas back into the architectural implementation.
- This process is being handled by a CAS engineer.

CMS Architecture: Detector Geometry Description

This task will provide an environment for creating, manipulating, and using the parameters describing the CMS detector in a consistent manner.
- In particular, it covers the geometrical description of the detector elements at various levels (full engineering detail, full GEANT detail, fast simulation, trigger-tower geometries, etc.), associated material properties, the magnetic field map, etc. The Detector Description sub-system will serve a number of clients, including OSCAR, the Fast Simulation, ORCA, the Calibration, and the User Analysis Environment.
- The prototype will use a generic interface/viewpoint framework, based on XML and mediators, which is being developed for CMS-wide data access.
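The interface/viewpoint idea above can be illustrated with a toy sketch: the detector is described once in XML, and a mediator object translates that single source into whatever view a particular client (simulation, reconstruction, visualization) needs. The element names and the `GeometryMediator` class below are hypothetical illustrations, not the actual CMS detector-description schema.

```python
import xml.etree.ElementTree as ET

# A toy geometry source (hypothetical element names, not the CMS schema).
GEOMETRY_XML = """
<detector name="toy">
  <volume name="barrel" material="PbWO4" rmin="1.24" rmax="1.29"/>
  <volume name="endcap" material="PbWO4" rmin="0.32" rmax="1.71"/>
</detector>
"""

class GeometryMediator:
    """Mediator hiding the XML source: each client asks only for
    the view of the detector description it needs."""
    def __init__(self, xml_text):
        self.root = ET.fromstring(xml_text)

    def volume(self, name):
        # detailed view: all stored parameters of one volume
        node = self.root.find(f"volume[@name='{name}']")
        return dict(node.attrib)

    def material_of(self, name):
        # simplified view, e.g. for a fast-simulation client
        return self.volume(name)["material"]
```

A full-detail client would call `volume()`, while a fast simulation could use the reduced `material_of()` view, both backed by the same XML description.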

CMS Architecture 2.1: Future Plans
- Complete documentation and analysis of the architecture in the CAFE project.
- Complete the functional prototype for the Generic Detector Geometry Description sub-project.
- Develop the architecture and framework for use in parts of the reconstruction architecture: NOT physics reconstruction code, but the underlying framework for fast, efficient physics code.
- General CARF (CMS Architecture and Framework) development.

IGUANA 2.2

The IGUANA software project addresses interactive visualization software needs in three domains:
- graphical user interfaces (GUIs)
- interactive detector and event visualization
- interactive data analysis and presentation

for use in a variety of areas such as offline simulation and reconstruction, data analysis, and test beams.
- Tasks include the assessment of use cases and requirements, and the evaluation, integration, adaptation, verification, deployment, and support in the CMS environment of visualization software from HEP, academia, the public domain, and the commercial sector.
- Pre-existing software is exploited as much as possible to optimize the use of the available resources.

IGUANA 2.2: General Strategy
- Focus on a sustainable medium- to long-term strategy.
- Provide a general set of tools (not a single application) for a wide variety of applications and environments, for both experienced developers and non-expert users.
- Strong emphasis on modularity and use of standards to address the issues of scalability, maintenance, deployment, support, and long-term evolution.
- Pro-actively exploit software developed elsewhere, for example:
  - HEP (e.g. HTL, HepODBMS, HEPVis, Lizard, ...)
  - Public domain (e.g. Qt GUI + extensions, Mesa OpenGL, ...)
  - Commercial (e.g. OpenInventor, NAG C, ...)

IGUANA work involves evaluation, integration, and support, as well as developing extensions and CMS-specific software.

IGUANA and Related Software Modules

[Diagram: the IGUANA software modules (event display, graphical user interfaces, histograms and persistent tags, plotting, fitting and statistical analysis) built on LHC++/HEP, public-domain, and commercial software.]

IGUANA Functional Prototype Milestone
- The milestone was delayed from June 2000 to October 2000:
  - shortage of professional software engineering manpower
  - alignment with the first release of the CERN Lizard interactive analysis tool
- The milestone is satisfied (imminent release of IGUANA 2.2.0):
  - documented requirements
  - software infrastructure
  - a set of software prototypes, packages, and documentation
- Now is the time for reflection and future planning:
  - document what was learned and the current software
  - broad discussion of where we are going and of priorities, with input from the Collaboration

IGUANA Graphical User Interface
- IGUANA evaluation and prototyping with emphasis on OO design, C++ API, and standards (facilitates integration): functionality, extensibility, widespread adoption, support, ...
- The Qt library has all the usual widgets; many other special-purpose widgets are also available.

IGUANA Detector and Event Visualization
- Usual emphasis on OO design, C++ API, standards, functionality, extensibility, widespread adoption, support, ...
- Build extensions for event display based on HEP, public-domain, and commercial software.
- Development of a CMS-specific program (Cmscan with CARF and ORCA).

Some details:
- Basic graphics technologies: X11, Qt, OpenGL, OpenInventor, SoQt, and QGL.
- IGUANA viewers: performant 3D rendering; rotations, translations, zoom, slicing, visibility control, ...
- Interfaces to GEANT3 (CMSIM) and OpenInventor: the full CMS GEANT3 detector could be displayed (very slowly!); IGUANA implements sensible choices of volumes to display.
- Event visualization: implemented within the context of ORCA; currently (ORCA 4.3.0) about half of the obvious reconstructed objects one would like to see are visualizable (work in progress).
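The viewer operations listed above (rotation, translation, zoom) all reduce to simple transforms applied to scene coordinates before rendering. A minimal, purely illustrative sketch in Python (the function names are hypothetical; IGUANA itself delegates these operations to OpenGL/OpenInventor):

```python
import math

def rotate_z(p, angle):
    """Rotate point p = (x, y, z) about the z axis by angle radians."""
    x, y, z = p
    c, s = math.cos(angle), math.sin(angle)
    return (c * x - s * y, s * x + c * y, z)

def zoom(p, factor):
    """Uniform scale about the origin: the essence of a zoom control."""
    return tuple(factor * c for c in p)

def translate(p, offset):
    """Shift a point by an offset vector."""
    return tuple(a + b for a, b in zip(p, offset))
```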

IGUANA Detector and Event Visualization

[Screenshot: the IGUANA viewer with its scene controller.]

IGUANA Interactive Data Analysis and Presentation
- Statistical / numerical analysis: the responsibility of the CERN/IT/API group. The key issue is long-term support for the minimization engine; the NAG C engine works in parallel to MINUIT with the same C++ API.
- Histograms / tags: the responsibility of CERN/IT/API (the HTL and HepODBMS products); extensions and variations on tags (see other talks).
- Plotting: one year ago this was not well covered, hence IGUANA developed a plotter; the same basis has now been adopted by CERN/IT/API for their "Qplotter".
- General-purpose analysis: many IGUANA prototypes, with future emphasis on the data interface. The CERN/IT/API "Lizard" interactive analysis tool: October 2000.
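The point about MINUIT and the NAG C engine sharing one C++ API is an instance of a general pattern: interchangeable minimization engines behind a common interface. A small Python sketch of the idea (the class names and both toy engines are hypothetical illustrations, not the actual CERN/IT/API interface):

```python
class Minimizer:
    """Common interface: concrete engines are interchangeable,
    analogous to MINUIT and the NAG C engine behind one C++ API."""
    def minimize(self, f, x0):
        raise NotImplementedError

class GradientDescent(Minimizer):
    def __init__(self, lr=0.1, steps=500, eps=1e-6):
        self.lr, self.steps, self.eps = lr, steps, eps

    def minimize(self, f, x0):
        # 1-D gradient descent with a central-difference derivative
        x = x0
        for _ in range(self.steps):
            grad = (f(x + self.eps) - f(x - self.eps)) / (2 * self.eps)
            x -= self.lr * grad
        return x

class GoldenSection(Minimizer):
    def __init__(self, lo=-100.0, hi=100.0, tol=1e-8):
        self.lo, self.hi, self.tol = lo, hi, tol

    def minimize(self, f, x0):
        # golden-section search on [lo, hi]; x0 is unused by this engine
        invphi = (5 ** 0.5 - 1) / 2
        lo, hi = self.lo, self.hi
        while hi - lo > self.tol:
            c, d = hi - (hi - lo) * invphi, lo + (hi - lo) * invphi
            if f(c) < f(d):
                hi = d
            else:
                lo = c
        return (lo + hi) / 2
```

Client code written against `Minimizer` keeps working when the engine is swapped, which is the property the slide highlights.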

IGUANA: Many Prototypes

[Screenshots: HTL browser and plotter, fitting, and tag browser prototypes.]

IGUANA 2.2 Future Directions

The functional prototype phase is finished for all of CMS software; we are re-examining and prioritizing use cases.
- Graphical user interface: continuing support is "only" a matter of manpower.
- Detector and event visualization: basic functionality is in place with ORCA; committed to steady development.
- Interactive data analysis and presentation: there is a very large range of prototypes and products that could be developed; we are choosing a few key directions not covered by others.

IGUANA Interactive Data Analysis and Presentation
- Continue work with the CERN/IT division to utilize Lizard, which includes a PAW-like front end to existing back-end packages (HTL, HepODBMS, Qplotter, HepFitting).
- Move towards closer integration with the data: we can do much more, and better, than just an N-tuple today.
- CMS has made a big jump in data accessibility: a widely used, very powerful object model. Our user interfaces to the new data model need to catch up.
- Focus on issues of integration: a user-friendly and productive environment built from existing software, covering federation management, browsing databases, and running different types of analysis jobs, in addition to "mere" presentation.
- There is a great range of possibilities for user tools: exciting things.

Distributed Data Management and Processing 2.3

The Distributed Data Management and Processing software project addresses issues of database replication and access, and of process management, in four sections:
- Distributed Task Management
- Distributed Database Management
- Distributed Production Tools
- System Simulation

Distributed Data Management and Processing will provide tools for the CMS distributed computing model: allowing submission, monitoring, and control of processes at remote sites; automatic replication and synchronization of databases between regional centers; and tools to facilitate production, database access, and system monitoring.
- This task aims to provide tools needed for use in production today as it develops more advanced tools for the future.

DDMP Distributed Task Scheduling

A system to efficiently handle process management over the computing grid.
- The Distributed Task Scheduling sub-project has a working prototype:
  - The service allows clients to submit, monitor, and terminate jobs as a set.
  - A scheduling mechanism allows selection of processors based on processor type, load, and availability of the dataset.
  - Replicated states are maintained so that computations will not be lost if a server fails.
- The system has been tested on 32 processors with the ORCA production software.
- The system has scaled to 64 processors.
- Documentation is in progress.
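The selection criteria listed for the prototype (processor type, load, and local availability of the dataset) amount to a matchmaking function. A minimal Python sketch of that logic, with hypothetical data structures (this is not the prototype's actual code):

```python
from dataclasses import dataclass, field

@dataclass
class Processor:
    name: str
    ptype: str                                  # e.g. "linux-x86"
    load: float                                 # current load average
    datasets: set = field(default_factory=set)  # datasets held locally

def pick_processor(processors, required_type, dataset):
    """Choose a processor of the required type that already holds the
    dataset, preferring the least loaded; fall back to the least-loaded
    processor of the right type if no local copy exists."""
    eligible = [p for p in processors if p.ptype == required_type]
    local = [p for p in eligible if dataset in p.datasets]
    pool = local or eligible
    return min(pool, key=lambda p: p.load) if pool else None
```

The fallback branch mirrors the "jobs to data, data to jobs" trade-off discussed later in the talk: a remote processor can still be chosen, at the cost of moving the dataset.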

DDMP Distributed Database Management

Developing tools to easily replicate and synchronize databases between regional centers.
- The first tool was an investigative prototype written in Perl.
- The second prototype is based on the Grid Data Management Pilot (GDMP), a Globus-middleware-based toolkit for database replication and synchronization:
  - Catalogues of the database contents are distributed to subscribing sites.
  - Database files are staged if necessary from the HPSS, transferred to the site, and automatically attached to the local federation.
  - The tools include the ability to resume transfers from checkpoints in the case of network failure.
  - Web publishing of transfer progress.
  - Ready for use in the fall ORCA production.
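The mechanics described above (subscription, catalogue publication, and checkpointed transfers) can be sketched with a toy model. This is an illustrative Python sketch of the pattern only, with hypothetical class and method names, not the actual GDMP interface:

```python
class Site:
    """Toy GDMP-style site: publishes catalogue updates to subscribers
    and replicates files with checkpointed, resumable transfers."""
    def __init__(self, name):
        self.name = name
        self.files = {}        # filename -> content
        self.subscribers = []  # sites receiving our catalogue updates
        self.partial = {}      # filename -> bytes received so far

    def add_subscriber(self, site):
        self.subscribers.append(site)

    def publish(self, filename, content):
        # store locally and push the catalogue update to all subscribers
        self.files[filename] = content
        for sub in self.subscribers:
            sub.replicate_from(self, filename)

    def replicate_from(self, source, filename, chunk=4, fail_after=None):
        # chunked transfer with a checkpoint after every chunk, so a
        # failed transfer resumes where it left off, not from scratch
        got = self.partial.get(filename, b"")
        data = source.files[filename]
        while len(got) < len(data):
            if fail_after is not None and len(got) >= fail_after:
                self.partial[filename] = got
                raise ConnectionError("simulated network failure")
            got += data[len(got):len(got) + chunk]
            self.partial[filename] = got
        self.files[filename] = got  # "attach to the local federation"
```

A `publish()` at one site then drives replication to every subscriber, and a transfer interrupted mid-way picks up from its checkpoint on the next attempt.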

Integration into the CMS Environment

[Diagram: the CMS/GDMP interface between two sites over the WAN. At the exporting site (Site A), physics software writes to the production federation; a database completeness check (CheckDB script) and catalogue generation trigger the GDMP server to publish the new export catalogue to the subscribers' list, with mass-storage stage and purge scripts copying files to the MSS. At the importing site (Site B), GDMP generates an import catalogue, replicates the files, stages from the MSS if needed, and transfers and attaches the files to the local user federation.]

DDMP: MONARC and System Simulation
- The MONARC collaboration began in 1998 with the charge of developing a toolkit for modeling large-scale computing systems for the LHC experiments.
- The MONARC toolkit was used to model the CMS spring production using ORCA. As a reminder, the spring production was the first large enough to attempt to accurately simulate pile-up events, combining previously simulated pile-up events with signal events; this involves a lot of reading and writing from database servers.
- The quality of the simulation was only made possible by entering accurate and detailed performance measurements from the production system into the simulation. Considerable work has gone into system monitoring tools.
- As an indication of the maturity of the simulation, it has begun to be used to evaluate task scheduling systems on models of proposed future CMS production facilities, using standard techniques and self-configuring neural networks.
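At its core, a toolkit like MONARC is a discrete-event simulation: jobs are dispatched to modeled resources and the resulting timings and utilizations are read off. A deliberately tiny Python sketch of that core idea (illustrative only, not MONARC's actual model, which also covers networks, databases, and tape):

```python
import heapq

def simulate(job_durations, n_cpus):
    """Toy discrete-event model: each job runs on the earliest-available
    CPU; returns the makespan and the average CPU utilization."""
    free_at = [0.0] * n_cpus   # time at which each CPU becomes free
    heapq.heapify(free_at)
    busy = 0.0
    for d in job_durations:
        start = heapq.heappop(free_at)   # earliest-free CPU
        heapq.heappush(free_at, start + d)
        busy += d
    makespan = max(free_at)
    return makespan, busy / (n_cpus * makespan)
```

Feeding measured per-job durations into even a model this simple shows why accurate production measurements matter: the predicted utilization is only as good as the timing inputs.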

DDMP: MONARC and System Simulation

[Screenshot: the MONARC simulation GUI.]

DDMP: MONARC and System Simulation

[Plots: simulation examples of network traffic and CPU efficiency, comparing measurement and simulation for the jet and muon productions.]

DDMP 2.3 Future Plans
- Productizing and integrating the Task Scheduler and Database Replicator to create tools supporting the CMS distributed computing model, in which jobs are submitted to data and data is moved to jobs to maximize efficiency.
- Implementation of the Request Redirection Protocol in Objectivity on the federated database servers for the fall production; this should dramatically increase the availability of the database servers.
- Continued work on the MONARC simulation: simulation of the fall production with additional input parameters, and a detailed study of tape access to determine modes and guidelines for the likely quality of service.

Support 2.4

One of the primary goals of hiring dedicated software professionals to work within the CMS Software and Computing project was to offer support to US users and developers so that they can participate fully in CMS physics. The Support software project is broken into three areas:
- Developer Support
- User Support
- Software Support Tools

Developer Support seeks to provide off-project developers with help from professional software engineers in designing, integrating, optimizing, and porting their code.

User Support ensures that documentation is available for publicly released CMS software packages, that current and useful examples exist for them, and that users have been trained on the CMS software necessary for physics success.

Support: User Support
- Software Configuration Support ensures that publicly released CMS software is properly versioned, is distributable using the agreed-upon tools (CVS), and can be configured and built (SCRAM).
- A few examples:
  - a properly documented and versioned IGUANA release
  - all 7 CAS engineers participated in the recent ORCA tutorial at Fermilab, which over 60 people attended.

Summary
- After hiring the 7 CAS engineers and seeing their skills, we have modified the project plan to be coherent.
- We have a lot of ideas for the future, but we also have a number of tools that work today.
- We are poised to make a significant contribution to the general CMS architecture and design.
- We are developing tools that will improve the way people do physics analysis.
- We are developing tools to support the data grid and the distributed computing model.
- We have reached a critical mass to offer user and developer software support to CMS physicists.