

GriPhyN, iVDGL and LHC Computing
Paul Avery, University of Florida
DOE/NSF Computing Review of LHC Computing
Lawrence Berkeley Laboratory, Jan. 14, 2003

GriPhyN/iVDGL Summary
- Both funded through the NSF ITR program
  - GriPhyN: $11.9M (NSF) + $1.6M (matching), 2000-2005
  - iVDGL: $13.7M (NSF) + $2M (matching), 2001-2006
- Basic composition
  - GriPhyN: 12 funded universities, SDSC, 3 labs (~80 people)
  - iVDGL: 16 funded institutions, SDSC, 3 labs (~70 people)
  - Experiments: US-CMS, US-ATLAS, LIGO, SDSS/NVO
- Large overlap of people, institutions, management
- Grid research vs. Grid deployment
  - GriPhyN: 2/3 “CS” + 1/3 “physics” (0% H/W)
  - iVDGL: 1/3 “CS” + 2/3 “physics” (20% H/W)
  - iVDGL: $2.5M Tier2 hardware ($1.4M LHC)
- Physics experiments provide frontier challenges
- Virtual Data Toolkit (VDT) in common

GriPhyN Institutions
U Florida, U Chicago, Boston U, Caltech, U Wisconsin (Madison), USC/ISI, Harvard, Indiana, Johns Hopkins, Northwestern, Stanford, U Illinois at Chicago, U Penn, U Texas (Brownsville), U Wisconsin (Milwaukee), UC Berkeley, UC San Diego, San Diego Supercomputer Center, Lawrence Berkeley Lab, Argonne, Fermilab, Brookhaven

iVDGL Institutions
- U Florida: CMS
- Caltech: CMS, LIGO
- UC San Diego: CMS, CS
- Indiana U: ATLAS, iGOC
- Boston U: ATLAS
- U Wisconsin, Milwaukee: LIGO
- Penn State: LIGO
- Johns Hopkins: SDSS, NVO
- U Chicago: CS
- U Southern California: CS
- U Wisconsin, Madison: CS
- Salish Kootenai: Outreach, LIGO
- Hampton U: Outreach, ATLAS
- U Texas, Brownsville: Outreach, LIGO
- Fermilab: CMS, SDSS, NVO
- Brookhaven: ATLAS
- Argonne Lab: ATLAS, CS
(Site categories: T2 / Software, CS support, T3 / Outreach, T1 / Labs (not funded))

Driven by LHC Computing Challenges
(LHC collaborations: physicists from 150 institutes in 32 countries)
- Complexity: millions of detector channels, complex events
- Scale: PetaOps (CPU), Petabytes (data)
- Distribution: global distribution of people & resources

Goals: PetaScale Virtual-Data Grids
[Architecture diagram: production teams, workgroups, and single investigators use interactive user tools on top of virtual data tools, request planning & scheduling tools, and request execution & management tools; these sit on resource management services, security and policy services, and other Grid services, over distributed resources (code, storage, CPUs, networks) and raw data sources. Scale targets: petaflops, petabytes, performance.]

Global LHC Data Grid (Experiment, e.g., CMS)
[Tiered architecture diagram: the online system feeds the CERN computer center (Tier 0, > 20 TIPS); Tier 1 national centers (USA, Korea, Russia, UK) connect over roughly 0.6 to > 1 Gbps links; below them sit Tier 2 regional centers, Tier 3 institute servers, and Tier 4 physics caches (PCs and other portals), with links down to MBytes/s. Capacity ratio Tier0 : (sum of Tier1) : (sum of Tier2) ~ 1:1:1.]

Coordinating U.S. Grid Projects: Trillium
- Trillium: GriPhyN + iVDGL + PPDG
  - Large overlap in project leadership & participants
  - Large overlap in experiments, particularly LHC
  - Joint projects (monitoring, etc.)
  - Common packaging, use of VDT & other GriPhyN software
- Organization from the “bottom up”
  - With encouragement from funding agencies NSF & DOE
- DOE (OS) & NSF (MPS/CISE) working together
  - Complementarity: DOE (labs), NSF (universities)
  - Collaboration of computer science/physics/astronomy encouraged
  - Collaboration strengthens outreach efforts
(See Ruth Pordes talk)

iVDGL: Goals and Context
- International Virtual-Data Grid Laboratory
  - A global Grid laboratory (US, EU, Asia, South America, ...)
  - A place to conduct Data Grid tests “at scale”
  - A mechanism to create common Grid infrastructure
  - A laboratory for other disciplines to perform Data Grid tests
  - A focus of outreach efforts to small institutions
- Context of iVDGL in the US-LHC computing program
  - Mechanism for NSF to fund proto-Tier2 centers
  - Learn how to do Grid operations (GOC)
- International participation
  - DataTag
  - UK e-Science programme: support 6 CS Fellows per year in U.S.
  - None hired yet. Improve publicity?

iVDGL: Management and Coordination
[Organization chart: US Project Directors, a US External Advisory Committee, a US Project Steering Group, and a Project Coordination Group on the U.S. side, coordinating the work teams (Facilities, Core Software, Operations, Applications, Outreach); a GLUE Interoperability Team links to collaborating Grid projects (TeraGrid, EDG, Asia, DataTAG) and experiments (BTeV, D0, CMS, ALICE, PDC, LCG?, Bio, Geo?, HI?) forming the international piece.]

iVDGL: Work Teams
- Facilities Team
  - Hardware (Tier1, Tier2, Tier3)
- Core Software Team
  - Grid middleware, toolkits
- Laboratory Operations Team (GOC)
  - Coordination, software support, performance monitoring
- Applications Team
  - High energy physics, gravity waves, digital astronomy
  - New groups: nuclear physics? Bioinformatics? Quantum chemistry?
- Education and Outreach Team
  - Web tools, curriculum development, involvement of students
  - Integrated with GriPhyN, connections to other projects
  - Want to develop further international connections

US-iVDGL Sites (Sep. 2001)
[Map of US sites, marked Tier1, Tier2, or Tier3: UF, Wisconsin, Fermilab, BNL, Indiana, Boston U, SKC, Brownsville, Hampton, PSU, J. Hopkins, Caltech, Argonne, UCSD/SDSC.]

New iVDGL Collaborators
- New experiments in iVDGL/WorldGrid
  - BTeV, D0, ALICE
- New US institutions to join iVDGL/WorldGrid
  - Many new ones pending
- Participation of new countries (at different stages)
  - Korea, Japan, Brazil, Romania, ...

US-iVDGL Sites (Spring 2003)
[Map of US sites, marked Tier1, Tier2, or Tier3: UF, Wisconsin, Fermilab, BNL, Indiana, Boston U, SKC, Brownsville, Hampton, PSU, J. Hopkins, Caltech, FIU, FSU, Arlington, Michigan, LBL, Oklahoma, Argonne, Vanderbilt, UCSD/SDSC, NCSA.]
Partners? EU, CERN, Brazil, Korea, Japan

An Inter-Regional Center for High Energy Physics Research and Educational Outreach (CHEPREO) at Florida International University
- E/O Center in Miami area
- iVDGL Grid activities
- CMS research
- AMPATH network
- Int’l activities (Brazil, etc.)
Status:
- Proposal submitted Dec.
- Presented to NSF review panel Jan. 7-8, 2003
- Looks very positive

US-LHC Testbeds
- Significant Grid testbeds deployed by US-ATLAS & US-CMS
  - Testing Grid tools in significant testbeds
  - Grid management and operations
  - Large productions carried out with Grid tools

US-ATLAS Grid Testbed
- Grappa: manages overall grid experience
- Magda: distributed data management and replication
- Pacman: defines and installs software environments
- DC1 production with grat: Data Challenge ATLAS simulations
- Instrumented Athena: Grid monitoring of ATLAS analysis apps
- vo-gridmap: virtual organization management
- Gridview: monitors U.S. ATLAS resources

US-CMS Testbed
[Map of US-CMS Grid testbed sites, including Brazil.]

Commissioning the CMS Grid Testbed
- A complete prototype
  - CMS production scripts
  - Globus, Condor-G, GridFTP
- Commissioning: require production-quality results!
  - Run until the testbed "breaks"
  - Fix the testbed with middleware patches
  - Repeat the procedure until the entire production run finishes!
- Discovered/fixed many Globus and Condor-G problems
  - Huge success from this point of view alone
  - ... but very painful

CMS Grid Testbed Production
[Architecture diagram: a master site runs IMPALA, mop_submitter, DAGMan, and Condor-G; each remote site (1, 2, ..., N) provides a batch queue and a GridFTP server, with GridFTP also at the master site.]
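To make the submission path concrete, here is a minimal Python sketch of how a master site could hand one production job to a remote gatekeeper through Condor-G. It is not the actual MOP/IMPALA tooling (which is not reproduced in these slides); the gatekeeper host, jobmanager name, script name, and file names are hypothetical.

# Sketch only: submit one CMS-style production job to a remote site via Condor-G.
# The gatekeeper host, jobmanager, and script/file names below are invented.
import subprocess

submit_description = """\
universe        = globus
globusscheduler = gatekeeper.example.edu/jobmanager-condor
executable      = run_cmsim.sh
arguments       = --events 500
output          = job.out
error           = job.err
log             = job.log
queue
"""

with open("production_job.sub", "w") as f:
    f.write(submit_description)

# condor_submit hands the job to Condor-G, which forwards it through the
# remote gatekeeper into that site's local batch queue.
subprocess.run(["condor_submit", "production_job.sub"], check=True)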

Production Success on CMS Testbed
[Diagram: MCRunJob components (Configurator, Linker, ScriptGenerator, Requirements, Self Description, MasterScript) feed a "DAGMaker" and VDL into MOP and Chimera.]
- Recent results
  - 150k events generated: 1.5 weeks continuous running
  - 1M-event run just completed on a larger testbed: 8 weeks

US-LHC Proto-Tier2 (2001)
[Hardware diagram: "flat" switching topology; router to the WAN, one FEth/GEth switch, dual-CPU P3 compute nodes, and data server(s) with >1 RAID array (1 TByte RAID).]

US-LHC Proto-Tier2 (2002/2003)
[Hardware diagram: "hierarchical" switching topology; router to the WAN, a GEth core switch feeding GEth/FEth switches, dual 2.5 GHz P4 compute nodes, and data server(s) with >1 RAID array (2-6 TBytes RAID).]

Creation of WorldGrid
- Joint iVDGL/DataTag/EDG effort
  - Resources from both sides (15 sites)
  - Monitoring tools (Ganglia, MDS, NetSaint, ...)
  - Visualization tools (Nagios, MapCenter, Ganglia)
- Applications: ScienceGrid
  - CMS: CMKIN, CMSIM
  - ATLAS: ATLSIM
- Submit jobs from US or EU
  - Jobs can run on any cluster
- Demonstrated at IST2002 (Copenhagen)
- Demonstrated at SC2002 (Baltimore)

WorldGrid

WorldGrid Sites

GriPhyN Progress
- CS research
  - Invention of the DAG as a tool for describing workflow
  - System to describe and execute workflow: DAGMan
  - Much new work on planning, scheduling, execution
- Virtual Data Toolkit + Pacman
  - Several major releases this year: VDT
  - New packaging tool: Pacman
  - VDT + Pacman vastly simplify Grid software installation
  - Used by US-ATLAS, US-CMS
  - LCG will use VDT for core Grid middleware
- Chimera Virtual Data System (more later)
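Since DAGMan is the workflow-execution system highlighted above, a minimal sketch of how a two-step production workflow can be expressed as a DAGMan input file written from Python may help. The job names and submit-file names are hypothetical; the JOB and PARENT/CHILD directives are standard DAGMan syntax, and each .sub file would be an ordinary Condor(-G) submit description like the one sketched earlier.

# Minimal sketch of a two-node DAGMan workflow (generation -> simulation).
# Job names and submit-file names are hypothetical.
dag = """\
JOB  GEN  cmkin.sub
JOB  SIM  cmsim.sub
PARENT GEN CHILD SIM
"""

with open("production.dag", "w") as f:
    f.write(dag)

# Submitting with condor_submit_dag lets DAGMan enforce the dependency:
# SIM starts only after GEN completes successfully.
# subprocess.run(["condor_submit_dag", "production.dag"], check=True)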

Virtual Data Concept
- A data request may
  - Compute locally or remotely
  - Access local or remote data
- Scheduling based on
  - Local/global policies
  - Cost
[Diagram: a "fetch item" request resolved across major facilities and archives, regional facilities and caches, and local facilities and caches.]
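As a toy illustration of the scheduling decision described above (this is not GriPhyN code), the Python fragment below resolves a virtual-data request by either fetching an existing replica or re-deriving the product, using invented cost functions; all names, URLs, and numbers are made up.

# Toy illustration of resolving a virtual data request: fetch a replica or recompute.
def resolve(item, replica_catalog, derivations, transfer_cost, compute_cost):
    replicas = replica_catalog.get(item, [])   # known physical copies
    recipe = derivations.get(item)             # how to re-create the item
    if replicas and (recipe is None or transfer_cost(replicas[0]) <= compute_cost(recipe)):
        return ("fetch", replicas[0])
    if recipe is not None:
        return ("derive", recipe)
    raise LookupError(f"{item} has no replica and no known derivation")

# Example with made-up costs: one remote replica vs. an expensive re-simulation.
catalog = {"higgs_candidates.root": ["gsiftp://tier2.example.edu/data/higgs_candidates.root"]}
recipes = {"higgs_candidates.root": "cmsim --config susy.cfg"}
choice = resolve("higgs_candidates.root", catalog, recipes,
                 transfer_cost=lambda url: 2.0,      # e.g. GB to move
                 compute_cost=lambda recipe: 50.0)   # e.g. CPU-hours to rerun
print(choice)   # -> ("fetch", "gsiftp://tier2.example.edu/...") under these costs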

Virtual Data: Derivation and Provenance
- Most scientific data are not simple "measurements"
  - They are computationally corrected/reconstructed
  - They can be produced by numerical simulation
- Science & engineering projects are increasingly CPU- and data-intensive
  - Programs are significant community resources (transformations)
  - So are the executions of those programs (derivations)
- Management of dataset transformations is important!
  - Derivation: instantiation of a potential data product
  - Provenance: exact history of any existing data product
- Programs are valuable, like data; they should be community resources
- We already do this, but manually!
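A small Python sketch of the derivation/provenance bookkeeping described above (it mirrors the vocabulary, not Chimera's actual object model): once each derivation records its transformation, inputs, and outputs, both "what produced this?" and "what must be recomputed?" become simple graph walks. All run and product names are invented.

# Toy provenance model: transformations, derivations, and data products.
from dataclasses import dataclass

@dataclass
class Derivation:                   # one execution of a transformation
    transformation: str             # e.g. "calibrate_muons"
    inputs: list
    outputs: list

derivations = [
    Derivation("calibrate_muons", ["raw_run42"],   ["calib_run42"]),
    Derivation("reconstruct",     ["calib_run42"], ["reco_run42"]),
    Derivation("forward_jet_ana", ["reco_run42"],  ["jets_run42"]),
]

def provenance(product):
    """Exact history of a product: the chain of transformations that produced it."""
    for d in derivations:
        if product in d.outputs:
            return [step for i in d.inputs for step in provenance(i)] + [d.transformation]
    return []                       # a raw input has no recorded derivation

def downstream(product):
    """Everything that must be recomputed if `product` (e.g. a calibration) changes."""
    affected = set()
    for d in derivations:
        if product in d.inputs:
            for out in d.outputs:
                affected.add(out)
                affected |= downstream(out)
    return affected

print(provenance("jets_run42"))     # ['calibrate_muons', 'reconstruct', 'forward_jet_ana']
print(downstream("calib_run42"))    # {'reco_run42', 'jets_run42'}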

Virtual Data Motivations (1)
[Diagram: Transformation, Derivation, and Data linked by "product-of", "execution-of", and "consumed-by"/"generated-by" relations.]
- “I’ve detected a muon calibration error and want to know which derived data products need to be recomputed.”
- “I’ve found some interesting data, but I need to know exactly what corrections were applied before I can trust it.”
- “I want to search a database for 3 muon SUSY events. If a program that does this analysis exists, I won’t have to write one from scratch.”
- “I want to apply a forward jet analysis to 100M events. If the results already exist, I’ll save weeks of computation.”

Virtual Data Motivations (2)
- Data track-ability and result audit-ability
  - Universally sought by scientific applications
- Facilitates tool and data sharing and collaboration
  - Data can be sent along with its recipe
- Repair and correction of data
  - Rebuild data products (cf. "make")
- Workflow management
  - A new, structured paradigm for organizing, locating, specifying, and requesting data products
- Performance optimizations
  - Ability to re-create data rather than move it
- Needed: an automated, robust system

“Chimera” Virtual Data System
- Virtual Data API
  - A Java class hierarchy to represent transformations & derivations
- Virtual Data Language (VDL)
  - Textual form for people & illustrative examples
  - XML form for machine-to-machine interfaces
- Virtual Data Database
  - Makes the objects of a virtual data definition persistent
- Virtual Data Service (future)
  - Provides a service interface (e.g., OGSA) to persistent objects
- Version 1.0 available
  - To be put into VDT 1.1.6?
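To illustrate what making virtual-data definitions persistent buys, here is a small Python/SQLite sketch. The real Virtual Data Catalog schema is not given on the slide and certainly differs; the table layout and file names are invented, and only the CMKIN/CMSIM transformation names are taken from the WorldGrid slide earlier.

# Sketch only: persisting derivation records so they can be queried later.
import sqlite3

db = sqlite3.connect("vdc_sketch.db")
db.execute("""CREATE TABLE IF NOT EXISTS derivation (
                  id INTEGER PRIMARY KEY,
                  transformation TEXT,
                  input TEXT,
                  output TEXT)""")
db.executemany(
    "INSERT INTO derivation (transformation, input, output) VALUES (?, ?, ?)",
    [("cmkin", "susy.cfg",      "susy_gen.ntpl"),
     ("cmsim", "susy_gen.ntpl", "susy_sim.fz")])
db.commit()

# Provenance-style query: which transformation produced susy_sim.fz, and from what input?
for row in db.execute(
        "SELECT transformation, input FROM derivation WHERE output = ?",
        ("susy_sim.fz",)):
    print(row)   # -> ('cmsim', 'susy_gen.ntpl')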

Virtual Data Catalog Object Model

Chimera as a Virtual Data System
- Virtual Data Language (VDL)
  - Describes virtual data products
- Virtual Data Catalog (VDC)
  - Used to store VDL
- Abstract job flow planner
  - Creates a logical DAG (dependency graph)
- Concrete job flow planner
  - Interfaces with a Replica Catalog
  - Provides a physical DAG submission file to Condor-G
- Generic and flexible
  - As a toolkit and/or a framework
  - In a Grid environment or locally
[Diagram: VDL (textual or XML) is stored in the VDC; the abstract planner emits a logical DAX (XML); the concrete planner consults the Replica Catalog and emits a physical DAGMan DAG.]
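A compact Python sketch of the two-stage planning idea above: build the logical DAG for a requested product, then produce a concrete plan that consults a replica catalog and skips steps whose outputs already exist. This paraphrases the concept only; it is not Chimera's planner, and the transformations, file names, and catalog entry are invented.

# Conceptual sketch of abstract -> concrete planning (not Chimera's planner).
def abstract_plan(target, derivations):
    """Logical DAG as an ordered list of derivation steps ending at `target`."""
    for d in derivations:
        if target in d["outputs"]:
            steps = []
            for inp in d["inputs"]:
                steps += abstract_plan(inp, derivations)
            return steps + [d]
    return []                                    # raw input: nothing to plan

def concrete_plan(logical_steps, replica_catalog, site):
    """Keep only steps whose outputs are not already replicated; bind them to a site."""
    jobs = []
    for step in logical_steps:
        if all(out in replica_catalog for out in step["outputs"]):
            continue                             # data already exists: fetch instead of recompute
        jobs.append({"transformation": step["transformation"], "site": site})
    return jobs

derivations = [
    {"transformation": "cmkin", "inputs": ["susy.cfg"], "outputs": ["gen.ntpl"]},
    {"transformation": "cmsim", "inputs": ["gen.ntpl"], "outputs": ["sim.fz"]},
]
replica_catalog = {"gen.ntpl": "gsiftp://tier2.example.edu/gen.ntpl"}   # invented entry

logical = abstract_plan("sim.fz", derivations)
print(concrete_plan(logical, replica_catalog, site="ufl-tier2"))
# -> only the cmsim step is planned; gen.ntpl is taken from the existing replica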

Chimera Application: SDSS Analysis
[Diagram: the question "Size distribution of galaxy clusters?" is answered by combining the Chimera Virtual Data System, the GriPhyN Virtual Data Toolkit, and the iVDGL Data Grid (many CPUs) to produce the galaxy cluster size distribution.]

Virtual Data and LHC Computing
- US-CMS (Rick Cavanaugh talk)
  - Chimera prototype tested with CMS MC (~200K test events)
  - Currently integrating Chimera into standard CMS production tools
  - Integrating virtual data into Grid-enabled analysis tools
- US-ATLAS (Rob Gardner talk)
  - Integrating Chimera into ATLAS software
- HEPCAL document includes first virtual data use cases
  - Very basic cases; need elaboration
  - Discuss with LHC experiments: requirements, scope, technologies
- New proposal to the NSF ITR program ($15M)
  - "Dynamic Workspaces for Scientific Analysis Communities"
- Continued progress requires collaboration with CS groups
  - Distributed scheduling, workflow optimization, ...
  - Need collaboration with CS to develop robust tools

Summary
- Very good progress on many fronts in GriPhyN/iVDGL
  - Packaging: Pacman + VDT
  - Testbeds (development and production)
  - Major demonstration projects
  - Productions based on Grid tools using iVDGL resources
- WorldGrid providing excellent experience
  - Excellent collaboration with EU partners
  - Looking to collaborate with more international partners
  - Testbeds, monitoring, deploying VDT more widely
- New directions
  - Virtual data: a powerful paradigm for LHC computing
  - Emphasis on Grid-enabled analysis
  - Extending the Chimera virtual data system to analysis