1 Advances and Changes in Simulation Geoffrey Fox Professor of Computer Science, Informatics, Physics Pervasive Technology Laboratories Indiana University.


1 Advances and Changes in Simulation Geoffrey Fox Professor of Computer Science, Informatics, Physics Pervasive Technology Laboratories Indiana University Bloomington IN January

2 Trends in Simulation Research
- The HPCC (High Performance Computing and Communication) Initiative
  - Established parallel computing
  - Developed wonderful algorithms, especially in the partial differential equation and particle dynamics areas
  - Produced almost no useful software except MPI (messaging between parallel computer nodes)
- 1995-now: Internet explosion and development of the Web Service distributed-system model
  - Replaces CORBA, Java RMI, HLA, COM etc.
- now: almost no academic work in core simulation
  - Major projects like ASCI (DoE) and HPCMO (DoD) thrive
- 2003-?: Data Deluge apparent; the Grid links the Internet and HPCC with a focus on data-simulation integration

3 Some Implications of Trends
- New requirements corresponding to Grid/e-Science technology
  - Managing distributed data
  - Integration of data with simulations
- Internet (Web Service) software gives better infrastructure for building simulation environments, for both event-driven and time-stepped cases
  - Build Problem Solving Environments in terms of Web Services for capabilities like "Generate Mesh" or "Visualize"
  - Adopt the Web Service workflow model for computing, with the "Rule of the Millisecond"
- No new ideas for core parallel computing; just better software infrastructure and some new applications
- Data assimilation needs new algorithms and architectures ("Queen Bee" architecture)

4 e-Business e-Science and the Grid
- e-Business captures an emerging view of corporations as dynamic virtual organizations linking employees, customers and stakeholders across the world.
- e-Science is the similar vision for scientific research, with international participation in large accelerators, satellites or distributed gene analyses.
- The Grid or CyberInfrastructure integrates the best of the Web, agents, traditional enterprise software, high performance computing and peer-to-peer systems to provide the information-technology e-infrastructure for e-moreorlessanything.
- A deluge of data of unprecedented and inevitable size must be managed and understood.
- People, computers, data and instruments must be linked.
- On-demand assignment of experts, computers, networks and storage resources must be supported.

5 Some Important Styles of Grids
- Computational Grids were the origin of the concepts and link computers across the globe; high latency stops such a Grid from being used as a parallel machine
- Knowledge and Information Grids link sensors and information repositories, as in Virtual Observatories or bioinformatics (more detail on next slide)
- Collaborative Grids link multidisciplinary researchers across laboratories and universities
- Community Grids involve large numbers of peers rather than linking major resources; they link Grid and peer-to-peer network concepts
- The Semantic Grid links the Grid and AI communities with Semantic Web (ontology/metadata-enriched resources) and agent concepts
- Grid Service Farms supply services-on-demand, as in collaboration, GIS support, and filters

6 Information/Knowledge Grids
- Distributed (10's to 1000's of) data sources: instruments, file systems, curated databases …
- Data Deluge: 1 petabyte/year (now) to 100's of petabytes/year (2012); a Moore's law for sensors
- Possible filters assigned dynamically (on-demand)
  - Run an image-processing algorithm on a telescope image
  - Run a gene-sequencing algorithm on compiled data
- Needs a decision-support front end with "what-if" simulations
- Metadata (provenance) is critical to annotate data
- Integrate across experiments, as in multi-wavelength astronomy
- The Data Deluge comes from the pixels/year available

7 Virtual Observatory Astronomy Grid: Integrate Experiments
[Figure: Radio, Far-Infrared, Visible, and Visible + X-ray views; Dust Map; Galaxy Density Map]

8 e-Business and (Virtual) Organizations
- An Enterprise Grid supports the information system for an organization; includes "university computer center", "(digital) library", sales, marketing, manufacturing …
- An Outsourcing Grid links different parts of an enterprise together:
  - Manufacturing plants with designers
  - Animators with electronic-game or film designers and producers
  - Coaches with aspiring players (e-NCAA or e-NFL etc.)
  - Outsourcing will become easier …
- A Customer Grid links businesses and their customers, as in many web sites such as amazon.com
- e-Multimedia can use secure peer-to-peer Grids to link creators, distributors and consumers of digital music, games and films while respecting rights
- A Distance Education Grid links a teacher at one place with students all over the place, plus mentors and graders; shared curriculum, homework, live classes …

9 DAME: Distributed Aircraft Maintenance Environment (Rolls Royce and the UK e-Science Program)
[Diagram: in-flight data flows to a ground station, over a global network such as SITA, to the Engine Health (Data) Center, the airline, and the maintenance centre via Internet and pager]
- ~gigabyte per aircraft per engine per transatlantic flight
- ~5000 engines

10 NASA Aerospace Engineering Grid: it takes a distributed virtual organization to design, simulate and build a complex system like an aircraft

11 e-Defense and e-Crisis
- Grids support Command and Control and provide global situational awareness
  - Link commanders and frontline troops to each other and to archival and real-time data; link to what-if simulations
  - Dynamic heterogeneous wired and wireless networks
  - Security and fault tolerance essential
- System of Systems; Grid of Grids
  - The command and information infrastructure of each ship is a Grid; each fleet is linked together by a Grid; the President is informed by and informs the national defense Grid
  - Grids must be heterogeneous and federated
- Crisis management and response enabled by a Grid linking sensors, disaster managers, and first responders with decision support
- Define and build DoD-relevant services: collaboration, sensors, GIS, database etc.

12 SERVOGrid for e-Geoscience
[Diagram: closely coupled compute nodes, analysis and visualization, repositories and federated databases, sensor nets, streaming data, loosely coupled filters, and discovery services around a database]
- SERVOGrid (Solid Earth Research Virtual Observatory) will link Australia, Japan, USA …

13 SERVOGrid Requirements
- Seamless access to data repositories and large-scale computers
- Integration of multiple data sources (sensors, databases, file systems) with the analysis system
  - Including filtered OGSA-DAI (Grid database access)
- Rich metadata generation and access, with a SERVOGrid-specific schema extending OpenGIS (geography as a Web service) standards and using the Semantic Grid
- Portals with a component model for user interfaces and web control of all capabilities
- Collaboration to support world-wide work
- Basic Grid tools: workflow and notification
- NOT metacomputing

14 Old-Style Metacomputing Grid
[Diagram: large-scale parallel computers with large disks, plus analysis and visualization: spread a single large problem over multiple supercomputers]

15 Classes of Computing Grid Applications
- Running "pleasingly parallel" jobs, as in the United Devices and Entropia (Desktop Grid) "cycle stealing" systems
  - Can be managed ("inside" the enterprise, as in Condor) or more informal (as in computing-on-demand in industry, where the jobs spawned are perhaps very large: SAP, Oracle …)
- Support for distributed file systems, as in Legion (Avaki) and Globus, with a (web-enhanced) UNIX programming paradigm
  - Particle physics will run some 30,000 simultaneous jobs this way
- Pipelined applications linking data/instruments, compute, and visualization
- Seamless access, where Grid portals allow one to choose one of multiple resources with a common interface

16 When is a High Performance Computer?
We might wish to consider three classes of multi-node computers:
1) Classic MPP, with microsecond latency and scalable internode bandwidth (t_comm/t_calc ~ 10 or so)
2) Classic cluster, which can vary from configurations like 1) to 3) but typically has millisecond latency and modest bandwidth
3) Classic Grid or distributed systems of computers around the network, with inter-node communication latencies of 100's of milliseconds but possibly good bandwidth
- All have the same peak CPU performance, but synchronization costs increase as one goes from 1) to 3)
- Cost of the system (dollars per gigaflop) decreases by a factor of 2 at each step from 1) to 2) to 3)
- One should NOT use a classic MPP if class 2) or 3) suffices, unless some security or data issue dominates cost-performance
- One should not use a Grid as a true parallel computer; it can link parallel computers together for convenient access etc.
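The growing synchronization cost across the three classes can be made concrete with a toy cost model. The latency figures below are rough order-of-magnitude assumptions matching the slide (microseconds, milliseconds, 100's of milliseconds), not measurements of any real machine:

```python
# Illustrative cost model: what fraction of a run is lost to
# synchronization in each machine class? Latencies are assumptions.

def sync_overhead(latency_s, n_syncs, t_calc_s):
    """Fraction of total run time spent on synchronization latency."""
    t_sync = latency_s * n_syncs
    return t_sync / (t_sync + t_calc_s)

classes = {
    "MPP (microsecond latency)": 1e-6,
    "Cluster (millisecond latency)": 1e-3,
    "Grid (100's of ms latency)": 0.2,
}

# A hypothetical run: 10,000 synchronization points, 100 s of computation.
for name, latency in classes.items():
    frac = sync_overhead(latency, n_syncs=10_000, t_calc_s=100.0)
    print(f"{name}: {frac:.1%} lost to synchronization")
```

With these numbers the MPP loses a negligible fraction while the Grid spends most of its time waiting, which is exactly why a Grid should not be used as a true parallel computer.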

17 What is Happening?
- Grid ideas are being developed in (at least) two communities:
  - Web Service: W3C, OASIS
  - Grid Forum (high performance computing, e-Science)
- The Open Middleware Infrastructure Institute (OMII) is currently only in the UK but may spread to the EU and USA
- Service standards are being debated
- Grid operational infrastructure is being deployed
- Grid architecture and core software are being developed
- Particular system services are being developed "centrally"; OGSA is the framework for this
- Lots of fields are setting domain-specific standards and building domain-specific services
- Grids are viewed differently in different areas:
  - Largely "computing-on-demand" in industry (IBM, Oracle, HP, Sun)
  - Largely distributed collaboratories in academia

18 A typical Web Service
- In principle, services can be in any language (Fortran, Java, Perl, Python) and the interfaces can be method calls, Java RMI messages, CGI web invocations, or totally compiled away (inlining)
- The simplest implementations involve XML messages (SOAP) and programs written in net-friendly languages like Java and Python
[Diagram: a Portal talks through WSDL interfaces to Security, Catalog, Payment (credit card), Warehouse, and Shipping-control Web Services]
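To make "XML messages (SOAP)" concrete, here is a minimal sketch that builds a SOAP 1.1 request envelope in Python. The operation name `getCatalogItem` and the `example.org` namespace are invented for illustration; only the SOAP envelope namespace is the real standard one:

```python
# Build a minimal SOAP 1.1 request for a hypothetical catalog service.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"  # standard SOAP 1.1 namespace
APP_NS = "http://example.org/catalog"                  # hypothetical service namespace

def make_request(item_id):
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    call = ET.SubElement(body, f"{{{APP_NS}}}getCatalogItem")
    ET.SubElement(call, f"{{{APP_NS}}}itemId").text = item_id
    return ET.tostring(envelope, encoding="unicode")

print(make_request("B-1234"))
```

In a real deployment this XML would be POSTed over HTTP to an endpoint whose operations and types are described by a WSDL document.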

19 Services and Distributed Objects
- A web service is a computer program, running on either a local or a remote machine, with a set of well defined interfaces (ports) specified in XML (WSDL)
- Web Services (WS) have many similarities with Distributed Object (DO) technology, but there are some (important) technical and religious points of difference (not easy to distinguish)
  - CORBA, Java, COM are typical DO technologies
  - Agents are typically SOA (Service Oriented Architecture)
- Both involve distributed entities, but Web Services are more loosely coupled
  - WS interact with messages; DO with RPC (Remote Procedure Call)
  - DO have "factories"; WS manage instances internally, and interaction-specific state is not exposed and hence need not be managed
  - DO have explicit state (stateful services); WS use context in the messages to link interactions (stateful interactions)
- Claim: DO's do NOT scale; WS build on experience (with CORBA) and do scale
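The "context in the messages" point can be sketched in a few lines. A stateless service receives the full interaction context in each message and returns an updated one, so the service itself holds nothing between calls, where a distributed object would keep a server-side session. All names here are illustrative:

```python
# Sketch: interaction state travels in the message ("context"), so the
# service is stateless and any replica can handle any call.

def stateless_add_to_cart(message):
    """Each call receives the full context and returns an updated copy."""
    context = dict(message["context"])      # state arrives in the message
    cart = list(context.get("cart", []))
    cart.append(message["item"])
    context["cart"] = cart
    return {"status": "ok", "context": context}

# Two interactions; the client, not the service, carries the context.
reply1 = stateless_add_to_cart({"item": "mesh-generator", "context": {}})
reply2 = stateless_add_to_cart({"item": "visualizer",
                                "context": reply1["context"]})
print(reply2["context"]["cart"])
```

Because no instance state lives on the server, this style scales by simply adding replicas, which is the substance of the scaling claim above.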

20 Technical Activities of Note
- Look at different styles of Grids, such as Autonomic (Robust Reliable Resilient)
  - New Grid architectures are hard due to the investment required
- Critical services such as:
  - Security: build message-based, not connection-based
  - Notification: event services
  - Metadata: use the Semantic Web, provenance
  - Databases and repositories: instruments, sensors
  - Computing: job submission, scheduling, distributed file systems
  - Visualization, computational steering
  - Fabric and service management
  - Network performance
- Program the Grid: workflow
- Access the Grid: portals, Grid Computing Environments

21 System and Application Services?
- There are generic Grid system services: security, collaboration, persistent storage, universal access
  - OGSA (Open Grid Service Architecture) is implementing these as extended Web Services
- An application Web Service is a capability used either by another service or by a user
  - It has input and output ports; data comes from sensors or other services
- Consider satellite-based sensor operations as a Web Service:
  - Satellite management (with a web front end)
  - Each tracking station is a service
  - Image processing is a pipeline of filters, which can be grouped into different services
  - Data storage is an important system service
- Big services are built hierarchically from "basic" services
- Portals are the user (web browser) interfaces to Web Services

22 Satellite Science Grid Environment

23 Issues and Types of Grid Services
1) Types of Grid: R3, lightweight, P2P, federation and interoperability
2) Core infrastructure and hosting environment: service management, component model, service wrapper/invocation, messaging
3) Security services: certificate authority, authentication, authorization, policy
4) Workflow services and programming model: enactment engines (runtime), languages and programming, compiler, composition/development
5) Notification services
6) Metadata and information services: basic, including registry; semantically rich services and metadata; information aggregation (events); provenance
7) Information Grid services: OGSA-DAI/DAIT, integration with compute resources, P2P and database models
8) Compute/File Grid services: job submission, job planning, scheduling, management; access to remote files, storage and computers; replica (cache) management; virtual data; parallel computing
9) Other services, including Grid shell, accounting, fabric management, visualization, data-mining, computational steering, and collaboration
10) Portals and Problem Solving Environments
11) Network services: performance, reservation, operations

24 Grid Services for the Education Process
- "Learning Object" XML standards already exist; WebCT, Blackboard etc. could be converted to the service model
- Synchronous collaboration tools, including audio/video conferencing, are natural Grid services, as are:
  - Registration
  - Homework submission and performance (grading)
  - Authoring of curriculum
  - Online laboratories for real and virtual instruments
  - Quizzes of various types (multiple choice, random parameters)
  - Assessment data access and analysis
  - Scheduling of courses and mentoring sessions
  - Asynchronous access, data-mining and knowledge discovery
  - Learning Plan agents to guide students and teachers

25 SERVOGrid for e-Education
[Diagram: coarse-grain simulations, analysis and visualization, repositories and federated databases, field-trip data, streaming data, sensors, loosely coupled filters, and discovery services around a database]

26 (i)SERVO Web (Grid) Services for PSEs
- Programs: all applications wrapped as services using a proxy strategy
- Job submission: supports remote batch and shell invocations
  - Used to execute simulation codes (the VC suite, GeoFEST, etc.), mesh generation (Akira/Apollo) and visualization packages (RIVA, GMT)
- File management:
  - Uploading, downloading, backend crossloading (i.e. moving files between remote servers)
  - Remote copies, renames, etc.
- Job monitoring
- Workflow: Apache Ant-based remote service orchestration
  - For coupling related sequences of remote actions, such as RIVA movie generation
- Database services: support SQL queries
- Data services: support interactions with XML-based fault and surface-observation data
  - The world should develop open-source Grid/Web services for Geographical Information Systems per the OpenGIS specification

27 Building PSEs with the Rule of the Millisecond I
- Typical Web Services are used in situations with interaction delays (network transit) of 100's of milliseconds
- But the basic message-based interaction architecture only incurs a fraction of a millisecond of delay
- Thus use Web Services to build ALL PSE components
  - Use messages, NOT method/subroutine calls or RPC
[Diagram: four "nuggets" exchanging data through message interactions]
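The "fraction of a millisecond" claim is easy to sanity-check in-process: time the serialize-and-parse cycle of a small XML message, which is the per-message work the architecture itself adds before any network transit. The absolute number varies by machine; this sketch only shows the order of magnitude:

```python
# Rough check of the per-message overhead of XML messaging (no network).
import time
import xml.etree.ElementTree as ET

def roundtrip(payload):
    root = ET.Element("msg")
    root.text = payload
    wire = ET.tostring(root, encoding="unicode")   # "send": serialize
    return ET.fromstring(wire).text                # "receive": parse

n = 1000
start = time.perf_counter()
for _ in range(n):
    assert roundtrip("invoke GenerateMesh") == "invoke GenerateMesh"
per_call_ms = (time.perf_counter() - start) / n * 1e3
print(f"~{per_call_ms:.3f} ms per message roundtrip")
```

On current hardware this comes out well under a millisecond per message, which is why messaging overhead is negligible next to the 100-millisecond interaction delays PSE components already tolerate.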

28 Building PSEs with the Rule of the Millisecond II
- Messaging has several advantages over scripting languages:
  - Collaboration is trivial, by sharing messages
  - Software engineering benefits from greater modularity
  - Web Services do/will have wonderful support
- "Loose" application coupling uses workflow technologies
- Find the characteristic interaction time (milliseconds for programs; microseconds for MPI and particle codes) and use the best-supported architecture at that level
  - Two levels: Web Service (Grid) and C/C++/C#/Fortran/Java/Python
- The major difficulty with frameworks is NOT building them but supporting them
  - IMHO the only hope is to always minimize life-cycle support risks
  - Simulation/science is too small a field to support much!
- Expect to use DIFFERENT technologies at each level, even though it is possible to do everything with one technology
  - Trade off support versus performance/customization

29 Why we can dream of using HTTP and that slow stuff
- We have at least three tiers in the computing environment:
  - Client (user portal)
  - "Middle tier" (web servers/brokers)
  - Back end (databases, files, computers etc.)
- In Grid programming, we use HTTP (and used to use CORBA and Java RMI) in the middle tier ONLY, to manipulate a proxy for the real job
  - The proxy holds metadata
  - Control communication in the middle tier uses only metadata
  - "Real" (data transfer) high-performance communication happens in the back end
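The proxy idea can be sketched in a few lines: the middle tier exchanges only a small metadata record about the job, while the bulk data moves directly between back-end resources. The class name, fields, and `gridftp://` URLs below are invented for illustration:

```python
# Sketch: the middle tier holds and manipulates metadata only; the
# back end moves the actual data between the URLs named here.

class JobProxy:
    """Lightweight middle-tier stand-in for a remote job."""
    def __init__(self, job_id, input_url, output_url):
        self.metadata = {
            "id": job_id,
            "input": input_url,    # where the back end reads bulk data
            "output": output_url,  # where the back end writes results
            "status": "submitted",
        }

    def poll(self):
        # Control traffic: a tiny metadata exchange; fine over slow HTTP.
        return self.metadata["status"]

    def mark_done(self):
        self.metadata["status"] = "done"

proxy = JobProxy("run-42", "gridftp://store/in.dat", "gridftp://store/out.dat")
print(proxy.poll())   # only metadata crosses the middle tier
proxy.mark_done()
print(proxy.poll())
```

Since every middle-tier message is this small, HTTP latency is irrelevant to overall performance; only the back-end data transfer needs to be fast.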

30 Integration of Data and Filters
- One combines the OGSA-DAI data repository interface with the WSDL of the (Perl, Fortran, Python …) filter
- The user only sees WSDL, not the data syntax
- There are some non-trivial issues as to where the filtering compute power sits; Microsoft says put the filter next to the data
[Diagram: a database with an OGSA-DAI interface feeding a filter exposed through the filter's WSDL]

31 SERVOGrid (Complexity) Computing Model
[Diagram: distributed filters massage data for an HPC simulation; OGSA-DAI Grid services, other Grid and Web services, analysis, control and visualization surround it]
- This type of Grid integrates with parallel computing
- Multiple HPC facilities, but only one is used at a time
- Many simultaneous data sources and sinks
- Grid data assimilation

32 Data Assimilation
- Data assimilation implies one is solving some optimization problem, which might have a Kalman-filter-like structure
- Due to the data deluge, one will become more and more dominated by the data (N_obs much larger than the number of simulation points)
- The natural approach is to form, for each local (position, time) patch, the "important" data combinations, so that the optimization doesn't waste time on large-error or insensitive data
- Data reduction is done in a naturally distributed fashion, NOT on the HPC machine, as distributed computing is most cost-effective when the calculations are essentially independent
- The filter functions must be transmitted from the HPC machine

33 Distributed Filtering
[Diagram: geographically distributed sensor patches feed data filters on a distributed machine; each filter reduces N_obs per local patch to N_filtered per local patch and sends the filtered data to the HPC machine, which sends back the needed filters]
- N_obs per local patch >> N_filtered per local patch ≈ number of unknowns per local patch
- In the simplest approach, the filtered data are obtained by linear transformations of the original data, based on a Singular Value Decomposition of the least-squares matrix
- Factorize the matrix into a product over local patches

34 Two-level Programming I
- The paradigm implicitly assumes a two-level programming model
- We make a service (the same as a "distributed object" or "computer program" running on a remote computer) using conventional technologies:
  - A C++, Java or Fortran Monte Carlo module
  - Data streaming from a sensor or satellite
  - Specialized (JDBC) database access
- Such services accept and produce data from users, files and databases
- The Grid is built by coordinating such services, assuming we have solved the problem of programming the service

35 Two-level Programming II
- The Grid is discussing the composition of distributed services, with runtime interfaces to the Grid as opposed to UNIX pipes/data streams
- This is familiar from the use of UNIX shell, Perl or Python scripts to produce real applications from core programs
  - Such interpretative environments are the single-processor analog of Grid programming
- Some projects, like GrADS from Rice University, are looking at integration between the service and composition levels, but the dominant effort looks at each level separately
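The composition level can be sketched as plain function chaining: each "service" consumes and produces data, and a tiny workflow engine wires them together the way a shell pipeline wires programs. The service names and values below are illustrative stand-ins for real remote services:

```python
# Sketch of the composition level: services chained like a UNIX pipeline.

def generate_mesh(region):
    return {"region": region, "cells": 10_000}

def run_simulation(mesh):
    return {"region": mesh["region"], "max_stress": 3.2}

def visualize(result):
    return f"rendered {result['region']}: max stress {result['max_stress']}"

def workflow(stages, data):
    """Run each service on the previous service's output."""
    for stage in stages:
        data = stage(data)
    return data

print(workflow([generate_mesh, run_simulation, visualize],
               "southern-california"))
```

In a real Grid each stage would be a remote Web Service invoked by messages, but the composition logic, ordering stages and passing each output to the next input, is exactly this simple, which is why scripting environments are its single-processor analog.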

36 Conclusions
- Grids are inevitable and pervasive
- Simulations should build on commodity technology
- We can expect Web Services and Grids to merge, with a common set of general principles but different implementations with different scaling and functionality trade-offs
- We will be flooded with data, information and purported knowledge
- Re-examine where to use data and where to use simulation: double the size of your supercomputer versus integrating sensors with it!
- We should be re-examining software architectures: use explicit messaging wherever possible
- PSEs, HLA, Command and Control, GIS, collaboration, and data federation are all impacted by service-based architectures

37 Grid Computing: Making The Global Infrastructure a Reality
Based on work done in preparing the book edited with Fran Berman and Anthony J. G. Hey; hardcover, 1080 pages, published March

38 Other References
- See the webcast in an Oracle technology series
- See also the "Gap Analysis" (I can send you nicely printed versions); the end of it is a good collection of references, and it gives both a general survey of current Grids and specific examples from the UK
  - An appendix gives more details
- White paper on Grids in DoD
- See also GlobusWorld and the Grid Forum