S-Matrix and the Grid. Geoffrey Fox, Professor of Computer Science, Informatics, and Physics, Pervasive Technology Laboratories, Indiana University Bloomington.


1 S-Matrix and the Grid. Geoffrey Fox, Professor of Computer Science, Informatics, and Physics, Pervasive Technology Laboratories, Indiana University, Bloomington IN. December

2 S-Matrix and PWA
- We need an amplitude analysis to find the most “interesting” resonances
- If this makes sense, we are effectively parameterizing the photon-Reggeon amplitude with a resonance at the “top” vertex in the full (123 in the diagram) or partial (12, 23, 31) channel
- This is complicated, as the problem is off-diagonal, has one “fake” particle, and often more than 2 final particles
- This requires many approximations whose effects can be estimated with S-Matrix theory: analyticity, unitarity, crossing, Regge theory, spin formalism, duality, Finite Energy Sum Rules
- (Diagram: Reggeon exchange for production, with exchange to the target and Regge behavior in the top vertex)

3 Some Lessons from the Past I
All confusing effects exist and there is no fundamental (correct) way to remove them, so one should:
- Minimize the effect of the hard (insoluble) problems, such as “particles from the wrong vertex” and “unestimatable exchange effects” sensitive to the slope of unclear Regge trajectories, absorption, etc.
- Carefully identify where effects are “additive” and where they confusingly overlap
- Note that many of the effects are intrinsically MORE important in the multiparticle case than in the relatively well studied π N → π N
- Try to estimate the impact of uncertainties from each effect on the results
- It would be very helpful to get systematic, very-high-statistics studies of relatively clean cases where the spectroscopy may be less interesting but one can examine the uncertainties
- Possibilities are A1, A2, A3, B1 peripherally produced, and even π N → π π N; K or π beams are good

4 S-Matrix Approach
S-Matrix ideas that work reasonably well include:
- Regge theory for the production process
- Two-component duality: resonances dual to Regge exchange, background dual to the Pomeron. This can help identify whether a resonance is a classic q q̄ state or exotic
- Use of Regge exchange at the top vertex to estimate high partial waves in the amplitude analysis
- Finite Energy Sum Rules for the top vertex as constraints on low-mass amplitudes, and the most quantitative way of linking high and low masses
- Ignore Regge cuts in production
- Unitarity effects are not included directly, due to duality double counting

5 Investigate Uncertainties
There are several possible sources of error:
- Errors in the quasi-two-body and limited-number-of-amplitudes approximations
- Unitarity (final state interactions)
- Errors in the two-component duality picture
- Exotic particles are produced and are just different
- Photon beams, π exchange, or some other “classic effect” not present in the original π N analyses behaves unexpectedly
- Failure of the quasi-two-body approximation
- Regge cuts cannot be ignored
- Background from other channels
Develop tests for these in both “easy” cases (such as “old” meson beam data) and in photon beam data at Jefferson Laboratory. Investigate all of these effects on any interesting result from the PWA.

6 Grid Computing: Making the Global Infrastructure a Reality
A book with Fran Berman and Anthony J. G. Hey; hardcover, 1080 pages, published March. I had more fun in days gone by; no more do I write “Skeletons in the Regge Cupboard” or “The Importance of Being an Amplitude”.

7 Some Further Links
- A talk on Grid and e-Science was webcast in an Oracle technology series
- See also the “Gap Analysis” survey of Grid technology
- This presentation is at
- Next semester: a course on “e-Science and the Grid” given by Access Grid
- A write-up for the May conference describes the proposed physics strategy

8 e-Business, e-Science, and the Grid
- e-Business captures an emerging view of corporations as dynamic virtual organizations linking employees, customers, and stakeholders across the world; the growing use of outsourcing is one example
- e-Science is the similar vision for scientific research, with international participation in large accelerators, satellites, or distributed gene analyses
- The Grid integrates the best of the Web, traditional enterprise software, high performance computing, and peer-to-peer systems to provide the information technology infrastructure for e-moreorlessanything
- A deluge of data of unprecedented and inevitable size must be managed and understood; people, computers, data, and instruments must be linked; on-demand assignment of experts, computers, networks, and storage resources must be supported

9 What is a High Performance Computer?
We might wish to consider three classes of multi-node computers:
1) Classic MPP, with microsecond latency and scalable inter-node bandwidth (t_comm/t_calc ~ 10 or so)
2) Classic cluster, which can vary from configurations like 1) to 3) but typically has millisecond latency and modest bandwidth
3) Classic Grid, or distributed systems of computers around the network, with inter-node latencies of 100s of milliseconds but possibly good bandwidth
All have the same peak CPU performance, but synchronization costs increase as one goes from 1) to 3), while the cost of the system (dollars per gigaflop) decreases by factors of 2 at each step from 1) to 2) to 3). One should NOT use a classic MPP if class 2) or 3) suffices, unless some security or data issue dominates over cost-performance. One should not use a Grid as a true parallel computer; it can, however, link parallel computers together for convenient access, etc.
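The trade-off between the three classes can be sketched with a toy cost model. This is an illustration only: the constants (total work, node count, number of synchronizations) are made-up round numbers, not figures from the talk, and the model simply charges each global synchronization one inter-node latency.

```python
# Toy model: perfectly parallel work plus synchronization overhead.
# All numbers below are hypothetical round figures for illustration.

def run_time(work_s, p, syncs, latency_s):
    """Time for `work_s` seconds of work on `p` nodes with `syncs`
    global synchronizations, each costing one inter-node latency."""
    return work_s / p + syncs * latency_s

WORK = 1000.0    # seconds of serial work
P = 100          # nodes
SYNCS = 10_000   # global synchronizations in the run

for name, latency in [("MPP (1 us)", 1e-6),
                      ("Cluster (1 ms)", 1e-3),
                      ("Grid (100 ms)", 0.1)]:
    t = run_time(WORK, P, SYNCS, latency)
    print(f"{name:15s} time = {t:8.2f} s, speedup = {WORK / t:6.1f}x")
```

Even with identical peak performance, the Grid-class latency wipes out the parallel speedup for tightly synchronized work, which is the slide's point about not using a Grid as a true parallel computer.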

10 Sources of Grid Technology
Grids support distributed collaboratories or virtual organizations, integrating concepts from:
- The Web
- Agents
- Distributed objects (CORBA, Java/Jini, COM)
- Globus, Legion, Condor, NetSolve, Ninf, and other high performance computing activities
- Peer-to-peer networks
The Web and P2P networks are perhaps the most important for “Information Grids”, and Globus for “Compute Grids”. A service architecture based on Web Services is the most critical feature.

11 Typical Grid Architecture
(Diagram: User Services and Portal Services sit above Application Services and System Services; these run on Middleware and the “Core” Grid over Raw (HPC) Resources and a Database.)

12 A Typical Web Service
- In principle, services can be in any language (Fortran, Java, Perl, Python) and the interfaces can be method calls, Java RMI messages, CGI Web invocations, or totally compiled away (inlining)
- The simplest implementations involve XML messages (SOAP) and programs written in net-friendly languages like Java and Python
- (Diagram: Payment, Credit Card, Warehouse, Shipping control, Security, and Catalog Web Services with WSDL interfaces, fronted by a Portal Service)
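As an illustration of the “XML messages (SOAP)” point, here is a sketch that builds a minimal SOAP 1.1 envelope for a hypothetical catalog service using only the Python standard library. The `getItemPrice` operation, its `urn:example:catalog` namespace, and the item id are all invented for the example.

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
APP_NS = "urn:example:catalog"   # hypothetical service namespace

# Give the SOAP namespace a readable prefix in the serialized XML.
ET.register_namespace("soap", SOAP_NS)

# Envelope > Body > operation element > parameter element.
envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
request = ET.SubElement(body, f"{{{APP_NS}}}getItemPrice")
ET.SubElement(request, f"{{{APP_NS}}}itemId").text = "GLX-1234"

print(ET.tostring(envelope, encoding="unicode"))
```

A client would POST this document over HTTP to the service endpoint described by the WSDL; the reply comes back as a similar envelope.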

13 What is Happening?
- Grid ideas are being developed in (at least) two communities: Web Services (W3C, OASIS) and the Grid Forum (high performance computing, e-Science)
- Service standards are being debated
- Grid operational infrastructure is being deployed
- Grid architecture and core software are being developed
- Particular system services are being developed “centrally”, with OGSA as the framework for this
- Lots of fields are setting domain-specific standards and building domain-specific services
- There is a lot of hype
- Grids are viewed differently in different areas: largely “computing-on-demand” in industry (IBM, Oracle, HP, Sun), and largely distributed collaboratories in academia

14 Technical Activities of Note
- Look at different styles of Grid, such as autonomic (robust, reliable, resilient); new Grid architectures are hard due to the investment required
- Critical services, such as:
  - Security: build message-based, not connection-based
  - Notification: event services
  - Metadata: use the Semantic Web, provenance
  - Databases and repositories: instruments, sensors
  - Computing: job submission, scheduling, distributed file systems
  - Visualization, computational steering
  - Fabric and service management
  - Network performance
- Program the Grid: workflow
- Access the Grid: portals, Grid Computing Environments

15 Issues and Types of Grid Services
1) Types of Grid: R3, lightweight, P2P, federation and interoperability
2) Core infrastructure and hosting environment: service management, component model, service wrapper/invocation, messaging
3) Security services: certificate authority, authentication, authorization, policy
4) Workflow services and programming model: enactment engines (runtime), languages and programming, compiler, composition/development
5) Notification services
6) Metadata and information services: basic including registry; semantically rich services and metadata; information aggregation (events); provenance
7) Information Grid services: OGSA-DAI/DAIT, integration with compute resources, P2P and database models
8) Compute/File Grid services: job submission, job planning, scheduling, management; access to remote files, storage, and computers; replica (cache) management; virtual data; parallel computing
9) Other services, including Grid shell, accounting, fabric management, visualization, data mining and computational steering, collaboration
10) Portals and problem solving environments
11) Network services: performance, reservation, operations

16 OGSA, OGSI, and Hosting Environments
- Start with Web Services in a hosting environment
- Add OGSI to get a Grid service and a component model
- Add OGSA to get an interoperable Grid, “correcting” differences in the base platform and adding key functionalities
- (Diagram: OGSI on Web Services in a hosting environment, over the network “given to us from on high”; broadly applicable services such as registry, authorization, monitoring, and data access in the OGSA environment; more specialized services such as data replication and workflow possibly OGSA; domain-specific services not OGSA)

17 Integration of Data and Filters
- One has the OGSA-DAI data repository interface combined with the WSDL of the (Perl, Fortran, Python, …) filter
- The user only sees WSDL, not the data syntax
- There are some non-trivial issues as to where the filtering compute power sits; Microsoft says put the filter next to the data
- (Diagram: DB with an OGSA-DAI interface feeding a Filter exposed through the WSDL of the filter)
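The “filter next to the data” argument can be illustrated with a toy sketch: if the selection runs server-side, only the matching rows need to cross the network. The row structure, field name, and selection window below are invented for the example.

```python
# Toy data repository: 10,000 rows with a made-up integer field.
rows = [{"event": i, "mass_mev": i} for i in range(10_000)]

def filter_near_data(rows, lo, hi):
    """Run the selection server-side, next to the repository, so only
    the selected rows are shipped back to the client."""
    return [r for r in rows if lo <= r["mass_mev"] <= hi]

selected = filter_near_data(rows, 100, 150)
print(f"shipped {len(selected)} of {len(rows)} rows")
```

The alternative of shipping all 10,000 rows to the client and filtering there moves roughly 200 times more data in this example, which is the compute-placement issue the slide raises.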

18 Data Technology Components of (Services in) a Computing Grid
(Diagram of the numbered components:)
1: Job Management Service (Grid service interface to the user or program client); plan execution
2: Schedule and control execution
3: Access to remote computers
4: Job submittal to the remote Grid service
5: Data transfer
6: File and storage access
7: Cache data / replicas
8: Virtual data
9: Grid MPI
10: Job status
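The numbered components above can be sketched as a toy orchestration. Every name here (the class, the site, the file paths, the job-id format) is a hypothetical stand-in; a real Grid would delegate these steps to actual services such as a scheduler and a file-transfer service.

```python
class JobManagementService:
    """Component 1: the Grid service interface seen by the client."""

    def __init__(self):
        self.jobs = {}

    def plan(self, name):
        # Components 2-3: schedule execution and pick a remote computer.
        return {"name": name, "site": "remote-site-A"}  # hypothetical site

    def submit(self, plan):
        # Component 4: job submittal to the remote Grid service.
        job_id = f"job-{len(self.jobs) + 1:03d}"
        self.jobs[job_id] = {"plan": plan, "state": "RUNNING"}
        return job_id

    def transfer(self, job_id, src, dst):
        # Components 5-7: data transfer, storage access, replica caching.
        self.jobs[job_id].setdefault("files", []).append((src, dst))

    def status(self, job_id):
        # Component 10: job status. This toy marks the job done at once.
        self.jobs[job_id]["state"] = "DONE"
        return self.jobs[job_id]["state"]

svc = JobManagementService()
jid = svc.submit(svc.plan("pwa-fit"))
svc.transfer(jid, "events.dat", "remote-site-A:/scratch/events.dat")
print(jid, svc.status(jid))
```

The point of the sketch is the shape of the interface: the client talks only to the job management service, which fans out to the other numbered components behind the scenes.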

19 Grid Strategy
- LHC computing will be very well established, and handling many times as much data as GlueX, by the time we need to go into production
- GriPhyN, iVDGL, EDG, EGEE, PPDG, and GridPP will customize core Grid technology for accelerator-based experiments: transporting data, caching data, and managing initial data analysis and Monte Carlo
- It is not clear whether this will be GT2, GT3, or OGSI, but it will certainly be Web Service based
- We need to keep in close touch with these activities
- Build the GlueX physics analysis consistent with this infrastructure

20 Implementing Grids
- Need to design a service architecture for GlueX
- Build on services from HEP and other fields
- Need some specific gluexML metadata specifying services and properties specific to GlueX
- Specify data structures and method interfaces in XML
- Use portlets for user interfaces
- Break up into services wherever possible, but only if “coarse-grain”
- (Diagram: closely coupled modules A and B in Java/Python, …, communicate by method calls costing 0.001 to 1 millisecond; in the coarse-grain service model, services A and B exchange messages with 0.1 to 1000 millisecond latency)

21 Collage of Portals
- Earthquakes (NASA)
- Fusion (DoE)
- OGCE Components (NSF)
- Publications (CGL)

22 Approach
- Convert every code into a Web Service
- Convert every utility, such as “visualization”, into a Web Service
- Have good support for authoring and manipulating metadata
- Use existing code/database technology (SQL/Fortran/C++) linked to “application Web/OGSA services”
- XML specification of models, computational steering, and scale is supported at the “Web Service” level, since we don’t need “high performance” here
- This allows use of Semantic Grid technology
- (Diagram: typical codes as Web Services linking to the user and to other Web Services (data sources) through an application WS)
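The “convert every code into a Web Service” step can be sketched as a thin wrapper that a service container would expose. The key = value input-file format is invented, and the `cat` command stands in for the real Fortran/C++ executable so the sketch runs anywhere with a POSIX userland.

```python
import os
import subprocess
import tempfile

def run_legacy_code(params):
    """Write a key = value input file, run the executable, return stdout.

    The input format and parameter names are hypothetical; a real
    wrapper would invoke the actual Fortran/C++ binary instead of cat.
    """
    with tempfile.NamedTemporaryFile("w", suffix=".in", delete=False) as f:
        for key, value in params.items():
            f.write(f"{key} = {value}\n")
        infile = f.name
    try:
        result = subprocess.run(["cat", infile],  # stand-in for the real code
                                capture_output=True, text=True, check=True)
        return result.stdout
    finally:
        os.unlink(infile)

output = run_legacy_code({"beam_energy_gev": 9.0, "n_events": 1000})
print(output)
```

A service container would expose `run_legacy_code` through a WSDL-described operation, so callers see only XML-typed parameters, never the legacy input-file syntax.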

23 Grid Computing Environments
(Diagram: User Services and Portal Services sit above Visualization, Modeling, Fitting, and Data Access Services; these System Services run on Middleware and the “Core” Grid (Globus) over Raw Data and Compute Resources and a Database.)

24 CERN LHC Data Analysis Grid