Slide 1 — News from the EB & LCG
Nick Brook, University of Bristol — 16th Sept '02
Outline: EB News; LCG News — structures, review of RTAGs
Slide 2 — EB News
- Roger Barlow re-elected as deputy chair of the EB; will take over as chair in September '03.
- Reporting structures in place: these measure manpower effort and deadlines, and also capture the experiments' requirements on UK (Tier-1/A) resources.
- First reports appeared for Q2 '02 — available on the EB web page. Next submissions are due now.
Slide 3 — EB News (cont.)
- Six application submissions to the Sheffield All-Hands conference.
- Applications are beginning to deliver their projects.
- New format for GridPP experiment sessions.
- All but one application position is now filled (the last remaining vacant post will be filled at the beginning of October).
- Successful joint ATLAS–LHCb workshop at Coseners: http://www.phy.bris.ac.uk/research/pppages/LHCb/coseners/CosenersHouse.htm
Slide 4 — Fundamental Goal of the LCG
To help the experiments' computing projects get the best, most reliable and accurate physics results from the data coming from the detectors.
- Phase 1 (2002–05): prepare and deploy the environment for LHC computing.
- Phase 2 (2006–08): acquire, build and operate the LHC computing service.
Slide 5 — Phase 1: High-level Goals
To prepare and deploy the environment for LHC computing:
- Development/support for applications — libraries, tools, frameworks, data management (inc. persistency), … common components.
- Develop/acquire the software for managing a distributed computing system on the scale required for LHC — the local computing fabric, and integration of the fabrics into a global grid.
- Put in place a pilot service: a "proof of concept" for the technology and the distributed analysis environment, and a platform for learning how to manage and use the system.
- Provide a solid service for physics and computing data challenges.
- Produce a TDR describing the distributed LHC computing system for the first years of LHC running.
- Maintain opportunities for re-use of developments outside the LHC programme.
Slide 6 — The LHC Computing Grid Project Organisation
[Organisation chart: Project Overview Board; Software and Computing Committee (SC2) — requirements, monitoring; Project Execution Board; LHCC — reports, reviews; Common Computing RRB (funding agencies) — resources]
Slide 7 — SC2 & PEB Roles
- SC2 includes the four experiments and the Tier-1 Regional Centres.
- SC2 identifies common solutions and sets requirements for the project. It may use an RTAG — Requirements and Technical Assessment Group: limited scope, two-month lifetime with an intermediate report, one member per experiment plus experts.
- The PEB manages the implementation: organising projects and work packages, coordinating between the Regional Centres, collaborating with Grid projects, organising grid services.
- SC2 approves the work plan and monitors progress.
Slide 8 — SC2 Monitors Progress of the Project
- Receives regular status reports from the PEB.
- Written status report every 6 months: milestones, performance, resources; estimates of time and cost to complete.
- Organises a peer review about once a year: presentations by the different components of the project, review of documents, review of planning data.
Slide 9 — Project Execution Organisation
Four areas, each with an area project manager: Applications; Grid Technology; Fabrics; Grid Deployment.
Slide 10 — RTAG Status
In the application software area:
- data persistency — completed 5th April '02
- software support process — completed 6th May '02
- mathematical libraries — completed 2nd May '02
- detector geometry description — running
- Monte Carlo generators — running
- applications architectural blueprint — running
- detector simulation — running
In the fabric area:
- mass storage requirements — completed 3rd May '02
In the Grid technology and deployment area:
- Grid technology use cases — completed 7th June '02
- Regional Centre categorisation — completed 7th June '02
Current status of RTAGs (and available reports): www.cern.ch/lcg/sc2
Slide 11 — Data Persistency (RTAG1): Technology
- The streaming layer should be implemented using the ROOT framework's I/O services.
- Components with relational implementations should make no deep assumptions about the underlying technology — nothing intentionally proposed precludes an implementation using open-source products such as MySQL.
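The "no deep assumptions about the underlying technology" precept can be illustrated with a small sketch: components depend on an abstract metadata-store interface, and any SQL engine (MySQL, or here SQLite as a stand-in) can sit behind it. All class and method names below are invented for illustration and are not part of POOL.

```python
import sqlite3
from abc import ABC, abstractmethod

class MetadataStore(ABC):
    """Backend-neutral interface: callers make no assumptions about the RDBMS."""
    @abstractmethod
    def put(self, key: str, value: str) -> None: ...
    @abstractmethod
    def get(self, key: str) -> str: ...

class SQLStore(MetadataStore):
    """One possible backend. SQLite (in-memory) is used purely as a stand-in;
    a MySQL backend would use its own upsert syntax behind the same interface."""
    def __init__(self, conn):
        self.conn = conn
        conn.execute("CREATE TABLE IF NOT EXISTS meta (k TEXT PRIMARY KEY, v TEXT)")
    def put(self, key, value):
        # "INSERT OR REPLACE" is SQLite dialect -- hidden from callers by the interface
        self.conn.execute("INSERT OR REPLACE INTO meta VALUES (?, ?)", (key, value))
    def get(self, key):
        row = self.conn.execute("SELECT v FROM meta WHERE k = ?", (key,)).fetchone()
        return row[0] if row else None

store = SQLStore(sqlite3.connect(":memory:"))
store.put("run42/events", "/castor/run42.root")
print(store.get("run42/events"))  # -> /castor/run42.root
```

Because clients hold only a `MetadataStore`, swapping the engine never touches calling code — which is the point of the precept.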
Slide 12 — Data Persistency (RTAG1): Implementation — POOL
Five work package areas:
- Storage manager & refs
- File catalog & Grid integration
- Collections & metadata
- Dictionary & conversion
- Infrastructure, integration & testing
http://lcgapp.cern.ch/projects/persist
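A file catalog of the kind named in the work packages maps logical file names to physical replicas. The toy below shows the idea only — the names, URLs and methods are invented and do not reflect POOL's actual interface.

```python
class FileCatalog:
    """Toy logical-name -> physical-name catalog (illustrative, not POOL's API)."""
    def __init__(self):
        self._replicas = {}  # logical file name -> list of physical locations

    def register(self, lfn, pfn):
        # A logical file may have replicas at several sites
        self._replicas.setdefault(lfn, []).append(pfn)

    def lookup(self, lfn):
        replicas = self._replicas.get(lfn, [])
        if not replicas:
            raise KeyError(f"no replica registered for {lfn}")
        # A real catalog would choose a replica by site, cost or availability
        return replicas[0]

cat = FileCatalog()
cat.register("lfn:higgs_mc_01", "root://cern.ch/store/higgs_mc_01.root")
cat.register("lfn:higgs_mc_01", "root://ral.ac.uk/store/higgs_mc_01.root")
print(cat.lookup("lfn:higgs_mc_01"))
```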
Slide 13 — Grid Use Cases (RTAG4)
A 79-page report covering 43 use cases. Global summary of the EDG response:
- Use case already implemented (release 1.2) — 19. Mostly basic job submission and basic data management. For half of these, WP8 agrees that the functionality is implemented in 1.2, but the implementation is considerably more complex than that outlined in the use case (especially data management); the release 2.0 implementations look simpler.
- Planned for release 2 — 10.
- Will be considered for release 3 — 4.
- Use case not detailed enough — 4. VO-wide resource allocation to users: HEPCAL did not make strong requirements on security. "Job splitting" and "production job" were purposely left vague in HEPCAL, for lack of a clear vision of how massive productions will be run on the Grid — one job auto-split into thousands, or thousands of jobs somehow logically grouped into one production?
- Not planned for any release — 7. Software publishing; virtual datasets (reliant on GriPhyN).
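The "one job auto-split into thousands" reading of job splitting can be sketched as a simple partition of an event range into sub-jobs. This is illustrative only — HEPCAL deliberately left the real semantics open, and the function and field names here are invented.

```python
def split_production(total_events, events_per_job):
    """Split one logical production job into sub-jobs over event ranges."""
    jobs = []
    start = 0
    job_id = 0
    while start < total_events:
        end = min(start + events_per_job, total_events)
        jobs.append({"id": job_id, "first_event": start, "n_events": end - start})
        job_id += 1
        start = end
    return jobs

jobs = split_production(total_events=1_000_000, events_per_job=50_000)
print(len(jobs))   # 20 sub-jobs
print(jobs[0])     # {'id': 0, 'first_event': 0, 'n_events': 50000}
```

The inverse view — thousands of independent jobs grouped into one logical production — would instead need bookkeeping that tags each submitted job with a production identifier.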
Slide 14 — Regional Centres (RTAG6)
A service-oriented view should be adopted for the categorisation of Regional Centres. It could be profitable to revisit the overall computing model in terms of services around 2004.
The important aspects for categorising RCs are:
- commitment to guarantee data management at a high QoS for the lifetime of the LHC;
- commitment to guarantee state-of-the-art network bandwidth to ensure efficient inter-operation;
- commitment to contribute to collaborative services.
Slide 15 — LCG Blueprint (RTAG8, ongoing): Precepts
- Software structure: STL/utilities, "core" infrastructure, "specialised" infrastructure.
- Component model: APIs (embedding frameworks, "own" plug-ins, end users), physical/logical module granularity, role of abstract interfaces, …
- Service model: uniform, flexible access to basic framework functionality.
- Object models: dumb vs. smart; enforced policies with run-time checking; a clear and bullet-proof ownership model.
- Distributed operation.
- Global objects.
- Dependencies: minimisation between components; run-time rather than compile-time.
- Interface to external components: generic adapters; version & variant identification.
- Exception handling.
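The component and service precepts — abstract interfaces, uniform access to framework functionality, run-time rather than compile-time dependencies — can be sketched with a minimal service registry. All names here are invented for illustration; they are not taken from the blueprint.

```python
from abc import ABC, abstractmethod

class Service(ABC):
    """Marker base: components depend on interfaces, never on implementations."""

class MessageSvc(Service):
    """Abstract interface for one basic framework service."""
    @abstractmethod
    def report(self, source, text): ...

class ConsoleMessageSvc(MessageSvc):
    """One concrete plug-in; others could be swapped in without client changes."""
    def report(self, source, text):
        print(f"[{source}] {text}")

class ServiceRegistry:
    """Uniform access point: components resolve services at run time,
    keeping dependencies run-time rather than compile-time."""
    def __init__(self):
        self._services = {}
    def provide(self, iface, impl):
        self._services[iface] = impl
    def get(self, iface):
        return self._services[iface]

registry = ServiceRegistry()
registry.provide(MessageSvc, ConsoleMessageSvc())
registry.get(MessageSvc).report("Tracker", "reconstruction done")
```

Clients ask the registry for `MessageSvc` and never name `ConsoleMessageSvc`, which is exactly the dependency-minimisation the precepts ask for.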
Slide 16 — LCG Blueprint (RTAG8, ongoing): Candidate Areas
- Scripting and interpreters (ROOT/CINT, Python)
- GUI toolkits (to build experiment-specific interfaces)
- Graphics (underlying general tools)
- Analysis tools (histogramming, fitting, graphical representation, …)
- Math libraries and statistics (already established)
- Job management
- Core services (platform-independent interface to system resources on LCG platforms: Linux with GNU and Intel compilers, Solaris & Windows)
- Foundation and utility libraries (essentially maths libraries & core services)
- Grid middleware interfaces (already an agreed two-experiment "common" project: GANGA)
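A "core services" layer of the kind listed above gives a platform-independent face to system resources, so the same call works on Linux, Solaris or Windows. A minimal Python sketch — the class and methods are hypothetical, not an LCG API:

```python
import os
import platform
import tempfile

class CoreServices:
    """Hypothetical platform-independent facade over system resources,
    in the spirit of the 'core services' layer."""

    @staticmethod
    def scratch_dir():
        # Resolved per-platform (/tmp, C:\Temp, ...) by the standard library
        return tempfile.gettempdir()

    @staticmethod
    def platform_tag():
        # A single identifier clients can use without OS-specific branches
        return f"{platform.system()}-{platform.machine()}"

print(CoreServices.platform_tag())
print(os.path.isdir(CoreServices.scratch_dir()))
```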
Slide 17 — LCG Blueprint (RTAG8, ongoing): Candidate Areas (cont.)
- Object dictionary and object model (in the context of POOL)
- Persistency and data management (in the context of POOL)
- Event processing framework (possible long-term common project components)
- Event model
- Event generation (ancillary services & support)
- Detector simulation (likewise)
- Detector geometry and materials (standard tools for describing, storing & modelling detector geometry)
- Trigger/DAQ
- Event reconstruction
- Detector calibration
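Standard tools for describing detector geometry typically model the detector as a tree of volumes with materials. A toy sketch with invented names, far simpler than any real geometry model:

```python
class Volume:
    """Toy detector-geometry node: a named volume with a material and daughters."""
    def __init__(self, name, material, children=None):
        self.name = name
        self.material = material
        self.children = children or []

    def count(self):
        # Total number of volumes in this subtree, including self
        return 1 + sum(c.count() for c in self.children)

# A tiny invented hierarchy: world volume containing a tracker and a calorimeter
world = Volume("World", "Air", [
    Volume("Tracker", "Silicon", [
        Volume("Layer1", "Silicon"),
        Volume("Layer2", "Silicon"),
    ]),
    Volume("Calorimeter", "PbWO4"),
])
print(world.count())  # 5
```

Describing, storing and modelling geometry then reduces to serialising and walking this tree, which is why a common toolkit across experiments is attractive.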
Slide 18 — LCG Summary
- GridPP has provided input both to the overall LCG management structure and to the RTAG activities.
- Activities are beginning to take off: persistency (POOL); software process & infrastructure; Grid Deployment Board (first meeting Oct 4th); interaction with middleware providers (not just EDG).
- RTAG procedures seem slow to take off: a lack of consistency early on, addressed by the "Blueprint" RTAG; the process is time-consuming, with frequent overlap of the necessary experts.