TICER Summer School, August 24th 2006


1 TICER Summer School, August 24th 2006
Thursday 24th August 2006. Dave Berry & Malcolm Atkinson, National e-Science Centre, Edinburgh.

2 Digital Libraries, Grids & E-Science
What is e-Science? What is Grid computing? Data Grids: requirements, examples, technologies. Data virtualisation. The Open Grid Services Architecture. Challenges.

3 What is e-Science?

4 What is e-Science?
Goal: to enable better research in all disciplines. Method: develop collaboration supported by advanced distributed computation, to generate, curate and analyse rich data resources from experiments, observations, simulations and publications, with quality management, preservation and reliable evidence; to develop and explore models and simulations, with computation and data at all scales and trustworthy, economic, timely and relevant results; and to enable dynamic distributed collaboration, facilitating information and resource sharing with security, trust, reliability, accountability, manageability and agility.

5 climateprediction.net and GENIE
Largest climate model ensemble: >45,000 users, >1,000,000 model years. Response of Atlantic circulation to freshwater forcing. climateprediction.net: altruistic computing, outreach to schools, multiple runs of simpler models, then statistics on the distribution of results illustrating model uncertainty and sensitivity. GENIE: again, the value of making it possible to share data sources and submit jobs easily as ensembles that explore a parameter space, contributing to shared data that is then analysed and visualised. Each is an example of significant behavioural change propagating among the scientists studying the natural environment.

6 Integrative Biology
Tackling two Grand Challenge research questions: What causes heart disease? How does a cancer form and grow? Together these diseases cause 61% of all UK deaths. Building a powerful, fault-tolerant Grid infrastructure for biomedical science, enabling biomedical researchers to use distributed resources such as high-performance computers, databases and visualisation tools to develop coupled multi-scale models of how these killer diseases develop. Note that different teams model different aspects of the heart; their geographic distribution is shown on the next slide. Courtesy of David Gavaghan & the IB team.

7 Biomedical Research Informatics Delivered by Grid Enabled Services
Portal, Synteny Grid Service, BLAST. This series of examples provides motivation and shows the kind of multi-site, multi-team, multi-discipline collaboration involved. Biomedical Research Informatics Delivered by Grid Enabled Services: NeSC (Edinburgh and Glasgow) and IBM. A supporting project for the CFG project (Cardiovascular Functional Genomics), generating data on hypertension. Rat, mouse and human genome databases. A variety of tools used: BLAST, BLAT, gene prediction, visualisation, ... A variety of data sources and formats: microarray data, genome DBs, project partner research data, medical records, ... The aim is an integrated infrastructure supporting data federation and security.

8 eDiaMoND: Screening for Breast Cancer
From 1 Trust to many Trusts: collaborative working, audit capability, epidemiology. Components around the eDiaMoND Grid: letters, radiology reporting systems, secondary capture or FFD, case information, X-rays and digital reading, SMF, case and reading information, CAD, temporal comparison, screening, electronic patient records, assessment/symptomatic information, biopsy, training (manage training cases, perform training), 3D images, patients, other modalities (MRI, PET, ultrasound). Better access to case information and digital tools. Idea: radiographers in multiple medical regions collaborate for training, comparator data, and perhaps backup and load sharing. This would also provide a sufficiently large pool of data to enable epidemiology on a scale sufficient to study rarer syndromes and presentations. It is difficult to anticipate constraints and impediments from existing working practices, e.g. regional variations in the description of lesions, worries about loss of personal contact with patients and process, worries about loss of jobs, ... Supplement mentoring with access to digital training cases and the sharing of information across clinics. Provided by the eDiaMoND project: Prof. Sir Mike Brady et al.

9 E-Science Data Resources
Curated databases (public, institutional, group, personal), online journals and preprints, text mining and indexing services, raw storage (disk & tape), replicated files, persistent archives, registries.

10 eBank
Slide from Jeremy Frey.

11 Biomedical data – making connections
12181 acatttctac caacagtgga tgaggttgtt ggtctatgtt ctcaccaaat ttggtgttgt cagtctttta aattttaacc tttagagaag agtcatacag tcaatagcct tttttagctt gaccatccta atagatacac agtggtgtct cactgtgatt ttaatttgca ttttcctgct gactaattat gttgagcttg ttaccattta gacaacttca ttagagaagt gtctaatatt taggtgactt gcctgttttt ttttaattgg
Slide provided by Carole Goble, University of Manchester.
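Raw sequence dumps like the one above mix coordinates, whitespace and bases; making connections starts with parsing them into a usable form. A minimal sketch in plain Python (no bioinformatics libraries assumed; the function names are illustrative, not from any named tool):

```python
def parse_sequence(raw: str) -> str:
    """Keep only nucleotide letters, dropping coordinate numbers and whitespace."""
    return "".join(c for c in raw.lower() if c in "acgt")

def base_composition(seq: str) -> dict:
    """Count each nucleotide in a cleaned sequence."""
    return {base: seq.count(base) for base in "acgt"}

# A fragment of the dump shown on the slide.
raw = "12181 acatttctac caacagtgga tgaggttgtt ggtctatgtt ctcaccaaat"
seq = parse_sequence(raw)
composition = base_composition(seq)
```

Once cleaned, the sequence can be handed to search or annotation services, which is where the grid-based linking discussed later comes in.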

12 Using Workflows to Link Services
Describe the steps in a scripting language; the steps are performed by a workflow enactment engine. Many languages are in use, trading off familiarity and availability, and detailed control versus abstraction. Incrementally develop a correct process. Workflows are sharable and editable, a basis for scientific communication and validation, and a valuable IPR asset. Repetition is now easy, parameterised explicitly and implicitly.
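The idea of "steps in a scripting language, performed by an enactment engine" can be sketched in a few lines. This is a toy illustration, not the model of Taverna, BPEL or any system named on the next slide; the step names are hypothetical:

```python
from typing import Callable, Dict, List

# A step reads a context dictionary and returns an extended one.
Step = Callable[[Dict], Dict]

def enact(workflow: List[Step], context: Dict) -> Dict:
    """A minimal enactment engine: thread the context through each step in order."""
    for step in workflow:
        context = step(context)
    return context

# Two hypothetical steps standing in for real services (fetch, filter).
def fetch(ctx: Dict) -> Dict:
    return {**ctx, "records": list(range(ctx["n"]))}

def keep_even(ctx: Dict) -> Dict:
    return {**ctx, "records": [r for r in ctx["records"] if r % 2 == 0]}

result = enact([fetch, keep_even], {"n": 10})
```

Because the workflow is just data (a list of steps), it can be shared, edited and re-run with different parameters, which is exactly the repeatability argument the slide makes.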

13 Workflow Systems
Shell scripts (enacted by shell + OS): common, but not often thought of as workflow; depends on context, e.g. NFS across all sites.
Perl (Perl runtime): popular in bioinformatics; similar context dependence; distribution has to be coded.
Java (JVM): a popular target because of JVM ubiquity; similar dependence; distribution has to be coded.
BPEL (BPEL enactment): OASIS standard for industry, coordinating use of multiple Web Services; low-level detail; tool support.
Taverna (Scufl): EBI, OMII-UK & MyGrid.
VDT / Pegasus (Chimera & DAGMan): high-level abstract formulation of workflows, automated mapping towards executable forms, cached result re-use.
Kepler: BIRN, GEON & SEEK.

14 Workflow example
Taverna in MyGrid "allows the e-Scientist to describe and enact their experimental processes in a structured, repeatable and verifiable way". GUI, workflow language, enactment engine.

15 Notification
Pub/sub for laboratory data, using a broker and ultimately delivered over GPRS. Comb-e-chem: Jeremy Frey.

16 Relevance to Digital Libraries
Similar concerns: data curation & management; metadata and discovery; secure access (AAA+); provenance & data quality; local autonomy; availability and resilience. Common technology: the Grid as an implementation technology.

17 What is Grid computing?

18 What is a Grid?
A grid is a system consisting of distributed but connected resources, and software and/or hardware that provides and manages logically seamless access to those resources to meet desired objectives. Resources may include web servers, license servers, handhelds, servers, supercomputers, workstations, clusters, data centers, databases and printers. Source: Hiro Kishimoto, GGF17 keynote, May 2006.

19 Virtualizing Resources
Access via Web services with common interfaces, layered over type-specific and resource-specific interfaces to the resources themselves: computers, storage, sensors, applications. Hiro Kishimoto: keynote, GGF17.

20 Ideas and Forms
Key ideas: virtualised resources, secure access, local autonomy. Many forms: cycle stealing, linked supercomputers, distributed file systems, federated databases, commercial data centres, utility computing.

21 Grid middleware services
A job-submit service, registry service (advertise) and brokering service (notify), over virtualized resources: CPU resource, compute service, data service, application service, printer service. Hiro Kishimoto: keynote, GGF17.

22 Key Drivers for Grids
Collaboration: expertise is distributed; resources (data, software licences) are location-specific; necessary to achieve a critical mass of effort and to raise sufficient resources. Computational power: rapid growth in the number of processors, powered by Moore's law and the device roadmap; the challenge is to transform models to exploit this. Deluge of data: growth in scale (number and size of resources), growth in complexity, and policy driving greater data availability.

23 Minimum Grid Functionalities
Supports distributed computation (data and computation) over a variety of hardware components (servers, data stores, ...) and software components (services: resource managers, computation and data services), with regularity that can be exploited by applications, by other middleware & tools, and by providers and operations. It will normally have security mechanisms to develop and sustain trust regimes.

24 Grid & Related Paradigms
Utility computing: computing "services"; no knowledge of the provider; enabled by grid technology. Cluster: tightly coupled, homogeneous, cooperative working. Distributed computing: loosely coupled, heterogeneous, single administration. Grid computing: large scale, cross-organizational, geographical distribution, distributed management. Source: Hiro Kishimoto, GGF17 keynote, May 2006.

25 Motives for Grids

26 Why use / build Grids?
Research arguments: enables new ways of working; new distributed & collaborative research; unprecedented scale and resources. Economic arguments: reduced system management costs; shared resources give better utilisation; pooled resources give increased capacity; load sharing & utility computing; cheaper disaster recovery.

27 Why use / build Grids?
Operational arguments: enable autonomous organisations to write complementary software components; set up, run & use complementary services; share operational responsibility; a general & consistent environment for abstraction, automation, optimisation & tools. Political & management arguments: stimulate innovation; promote intra-organisation collaboration; promote inter-enterprise collaboration.

28 Grids In Use: E-Science Examples
Data sharing and integration: life sciences, sharing standard data-sets and combining collaborative data-sets; medical informatics, integrating hospital information systems for better care and better science; sciences, e.g. high-energy physics. Simulation-based science and engineering: earthquake simulation. Capability computing: life sciences (molecular modeling, tomography); engineering and materials science; sciences (astronomy, physics). BLAST (Basic Local Alignment Search Tool): in bioinformatics, an algorithm for comparing biological sequences, such as the amino-acid sequences of different proteins or DNA sequences. CHARMM (Chemistry at HARvard Macromolecular Mechanics): a force field for molecular dynamics, and also the name of the molecular dynamics simulation package associated with this force field. High-throughput, capacity computing for life sciences (BLAST, CHARMM, drug screening), engineering (aircraft design, materials, biomedical), and sciences (high-energy physics, economic modeling). Source: Hiro Kishimoto, GGF17 keynote, May 2006.
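To make the BLAST description above concrete: the core trick is to index short fixed-length words (k-mers) of one sequence and look up words of the other, producing "seed" hits that the real tool then extends and scores. The sketch below shows only that seeding stage; it is a heavily simplified illustration, not the actual BLAST algorithm or API:

```python
def find_seeds(query: str, subject: str, k: int = 3):
    """Return (query_pos, subject_pos) pairs where a length-k word matches.

    This is only the seeding stage of a BLAST-like search; a real tool
    extends each hit into a scored local alignment.
    """
    # Index every k-mer of the subject sequence by its positions.
    index = {}
    for j in range(len(subject) - k + 1):
        index.setdefault(subject[j:j + k], []).append(j)
    # Look up each k-mer of the query in that index.
    hits = []
    for i in range(len(query) - k + 1):
        for j in index.get(query[i:i + k], []):
            hits.append((i, j))
    return hits
```

The index makes each lookup cheap, which is why this style of search scales to the genome databases the slide mentions far better than naive pairwise comparison.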

29 Data Requirements

30 Database Growth
EMBL DB: 111,416,302,701 nucleotides. PDB: 33,367 protein structures. Slide provided by Richard Baldock, MRC HGU, Edinburgh.

31 Requirements: User's viewpoint
Find data: registries & human communication. Understand data: metadata description; standard or familiar formats & representations; standard value systems & ontologies. Data access: find how to interact with the data resource, obtain permission (authority), make a connection, make a selection. Move data: in bulk or streamed (in increments).
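The final point, moving data "in bulk or streamed (in increments)", is a real design choice: bulk transfer is simple but holds the whole object in memory, while streaming bounds memory use and lets processing overlap the transfer. A small sketch of the two styles over ordinary Python file-like objects (illustrative only; grid transfer tools such as GridFTP make the same trade-off at much larger scale):

```python
import io

def move_bulk(src, dst):
    """Bulk transfer: read the whole object, then write it once."""
    dst.write(src.read())

def move_streamed(src, dst, chunk_size=4096):
    """Streamed transfer: move fixed-size increments, bounding memory use."""
    while True:
        chunk = src.read(chunk_size)
        if not chunk:
            break
        dst.write(chunk)

# Demonstration with in-memory buffers standing in for remote endpoints.
source = io.BytesIO(b"0123456789")
sink = io.BytesIO()
move_streamed(source, sink, chunk_size=4)
```

For a 3 TB datastore like GODIVA's (slide 38), only the streamed form is practical on the client side.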

32 Requirements: User's viewpoint 2
Transform data: to the format, organisation & representation required for computation or integration. Combine data: standard database operations plus operations relevant to the application model. Present results: to humans (data movement plus transformation for viewing); to application code (data movement plus transformation to the required format); to standard analysis tools, e.g. R; to standard visualisation tools, e.g. Spitfire.

33 Requirements: Owner's viewpoint
Create data: automated generation, accession policies, metadata generation, storage resources. Preserve data: archiving, replication, metadata, protection. Provide services with available resources: definition & implementation (costs & stability); resources (storage, compute & bandwidth).

34 Requirements: Owner's viewpoint 2
Protect services: authentication, authorisation, accounting, audit; reputation. Protect data: comply with owner requirements (encryption for privacy, ...). Monitor and control use: detect and handle failures, attacks and misbehaving users; plan for future loads and services. Establish the case for continuation: usage statistics; discoveries enabled.

35 Examples of Grid-based Data Management

36 Large Hadron Collider
The most powerful instrument ever built to investigate elementary particle physics. Data challenge: 10 Petabytes/year of data, i.e. 20 million CDs each year! Simulation, reconstruction and analysis: LHC data handling requires computing power equivalent to ~100,000 of today's fastest PC processors. Also: VLBA (Very Long Baseline Array), linked radio telescopes; NRAO, the National Radio Astronomy Observatory.

37 Composing Observations in Astronomy
Number & sizes of data sets as of mid-2002, grouped by wavelength: 12-waveband coverage of large areas of the sky; about 200 TB of data in total, doubling every 12 months; the largest catalogues approach 1B objects. Data and images courtesy of Alex Szalay, Johns Hopkins.

38 GODIVA Data Portal
Grid for Ocean Diagnostics, Interactive Visualisation and Analysis. Daily Met Office marine forecasts and gridded research datasets; National Centre for Ocean Forecasting. A ~3 TB climate model datastore accessed via Web Services. Interactive visualisations, including movies; ~30 accesses a day worldwide. Other GODIVA software produces 3D/4D visualisations, reading data remotely via Web Services. Online movies.

39 GODIVA Visualisations
Unstructured meshes, grid rotation/interpolation, geospatial databases vs. files (Postgres, IBM, Oracle), perspective 3D visualisation, Google Maps viewer.

40 NERC Data Grid www.ndg.nerc.ac.uk
The NERC DataGrid focuses on federation of NERC Data Centres: a grid for data discovery, delivery and use across sites. Data can be stored in many different ways (flat files, databases, ...). Strong focus on metadata and ontologies, with a clear separation between discovery and use of data. The prototype focuses on atmospheric and oceanographic data.

41 Global In-flight Engine Diagnostics
In-flight data is relayed via a ground station and a global network (e.g. SITA) to the DS&S Engine Health Center data centre, with results reaching the airline maintenance centre over the internet and by pager. Scale: 100,000 aircraft, 0.5 GB/flight, 4 flights/day: 200 TB/day. Now BROADEN; significant in winning the Boeing 787 engine contract. Distributed Aircraft Maintenance Environment: Leeds, Oxford, Sheffield & York; Jim Austin.

42 Data Grid Technologies

43 Storage Resource Manager (SRM)
A de facto and written standard in physics and beyond. A collaborative effort: CERN, FNAL, JLAB, LBNL and RAL. Essential bulk file storage: (pre-)allocation of storage; abstraction over storage systems; file delivery, registration and access. Data movement interfaces, e.g. GridFTP. A rich function set: space management, permissions, directory operations, data transfer & discovery.

44 Storage Resource Broker (SRB)
Developed at SDSC and widely used: archival document storage; scientific data in the bio-sciences, medicine, geo-sciences, ... Manages storage resource allocation, abstraction over storage systems, file storage, collections of files, metadata describing files and collections, and data transfer services.

45 Condor Data Management
Stork manages file transfers and may manage reservations. NeST manages data storage, comparable to GridFTP with reservations, over multiple protocols.

46 Globus Tools and Services for Data Management
GridFTP: a secure, robust, efficient data transfer protocol. The Reliable File Transfer Service (RFT): Web services-based; stores state about transfers. The Data Access and Integration Service (OGSA-DAI): a service providing access to data resources, particularly relational and XML databases. The Replica Location Service (RLS): a distributed registry that records the locations of data copies. The Data Replication Service: Web services-based; combines data replication and registration functionality. Slides from Ann Chervenak.
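The essential idea behind the RLS (and the FiReMan catalog on a later slide) is a mapping from one logical file name (LFN) to any number of physical replicas (PFNs). The toy in-memory registry below captures that idea; it is not the Globus RLS API, and the LFN/PFN strings are made-up examples:

```python
class ReplicaRegistry:
    """Toy stand-in for an RLS-style catalog: one logical file name
    maps to a set of physical file names (replica locations)."""

    def __init__(self):
        self._map = {}

    def register(self, lfn: str, pfn: str) -> None:
        """Record that a physical copy of this logical file exists."""
        self._map.setdefault(lfn, set()).add(pfn)

    def lookup(self, lfn: str) -> set:
        """Return all known physical locations for a logical file."""
        return set(self._map.get(lfn, set()))

    def unregister(self, lfn: str, pfn: str) -> None:
        """Remove one replica mapping, e.g. when a copy is deleted."""
        self._map.get(lfn, set()).discard(pfn)

# Hypothetical entries: two replicas of the same logical file at two sites.
rls = ReplicaRegistry()
rls.register("lfn://example/frame-0001", "gsiftp://siteA/data/frame-0001")
rls.register("lfn://example/frame-0001", "gsiftp://siteB/data/frame-0001")
```

A production catalog distributes this mapping across servers and persists it in a database; LIGO's deployment on the next slide holds 6 million LFNs mapping to over 40 million PFNs.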

47 RLS in Production Use: LIGO
The Laser Interferometer Gravitational Wave Observatory currently uses RLS servers at 10 sites, containing mappings from 6 million logical files to over 40 million physical replicas. Used in a customized data management system, the LIGO Lightweight Data Replicator (LDR), which includes RLS, GridFTP, a custom metadata catalog, and tools for storage management and data validation. Slides from Ann Chervenak.

48 RLS in Production Use: ESG
Earth System Grid: climate modeling data (CCSM, PCM, IPCC). RLS at 4 sites; data management coordinated by the ESG portal. Datasets stored at NCAR: 64.41 TB; 1230 portal users. IPCC data at LLNL: 26.50 TB in 59,300 files; 400 registered users; data downloaded in 263,800 files, averaging 300 GB downloaded/day; 200+ research papers being written. Slides from Ann Chervenak.

49 gLite Data Management
FTS: File Transfer Service. LFC: logical file catalogue. Replication Service: accessed through the LFC. AMGA: metadata services. Source: 2nd EGEE Review, CERN, gLite Middleware Status.

50 Data Management Services
FiReMan catalog: resolves logical filenames (LFNs) to the physical locations of files and storage elements; Oracle and MySQL versions available; secure services; attribute support; symbolic link support; deployed on the Pre-Production Service and the DILIGENT testbed. gLite I/O: POSIX-like access to Grid files; Castor, dCache and DPM support; has been used for the BioMedical demo; deployed on the Pre-Production Service and the DILIGENT testbed. AMGA metadata catalog: used by the LHCb experiment.

51 File Transfer Service
Reliable file transfer with a fully scalable implementation: Java Web Service front-end, C++ agents, Oracle or MySQL database support. Support for channel, site and VO management, with interfaces for management and statistics monitoring. GSIFTP, SRM and SRM-copy support; multi-VO support.

52 Commercial Solutions
Vendors include Avaki and DataSynapse. Benefits & costs: well packaged and documented, with support; can be expensive, but look for academic rates.

53 Data Virtualisation

54 Data Integration Strategies
Use a service provided by a data owner. Use a scripted workflow. Use data virtualisation services: arrange that multiple data services have common properties; arrange federations of these; arrange access presenting the common properties; expose the important differences; support integration accommodating those differences.

55 Data Virtualisation Services
Form a federation: a set of data resources with incremental addition; registration & description of the collected resources; warehouse data, or access it dynamically to obtain updated data; virtual data warehouses, automating the division between collection and dynamic access. Describe relevant relationships between data sources: incremental description plus refinement and correction. Run jobs, queries & workflows against the combined set of data resources, with automated distribution & transformation. Example systems: IBM's Information Integrator; GEON, BIRN & SEEK; OGSA-DAI is an extensible framework for building such systems.
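The federation idea above, incrementally adding member resources and running one query against the combined set, can be sketched as a small facade. This is a toy illustration of the pattern, not the OGSA-DAI or Information Integrator API; the sites and rows are invented:

```python
class Federation:
    """Toy data-virtualisation facade: dispatch one predicate to every
    member resource and merge the results, hiding where the rows live."""

    def __init__(self):
        self.members = []  # each member: callable(predicate) -> list of rows

    def add_member(self, query_fn):
        """Incremental addition of a data resource to the federation."""
        self.members.append(query_fn)

    def query(self, predicate):
        """Run one query across all members and merge the results."""
        results = []
        for member in self.members:
            results.extend(member(predicate))
        return results

# Two hypothetical sites holding rows as dictionaries.
site_a = [{"gene": "ace", "organism": "rat"}, {"gene": "ren", "organism": "rat"}]
site_b = [{"gene": "ace", "organism": "mouse"}]

fed = Federation()
fed.add_member(lambda p: [row for row in site_a if p(row)])
fed.add_member(lambda p: [row for row in site_b if p(row)])
hits = fed.query(lambda row: row["gene"] == "ace")
```

A real system also pushes the predicate down to each source, transforms between schemas, and handles failures; but the caller's view, one query over a virtual combined resource, is the same.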

56 Virtualisation variations
The extent to which homogeneity is obtained: regular representation choices (e.g. units); consistent ontologies; consistent data model; consistent schema (an integrated super-schema). The DB operations supported across the federation. The ease of adding federation elements. The ease of accommodating change as federation members change their schemas and policies. Whether drill-through to primary forms is supported.

57 OGSA-DAI
A framework for data virtualisation, in wide use in e-Science: BRIDGES, GEON, CaBiG, GeneGrid, MyGrid, BioSimGrid, e-Diamond, IU RGRBench, ... A collaborative effort: NeSC, EPCC, IBM, Oracle, Manchester, Newcastle. Querying of data resources: relational databases, XML databases, structured flat files. Extensible activity documents; customisation for particular applications.

58 OGF: Open Grid Services Architecture

59 The Open Grid Services Architecture
An open, service-oriented architecture (SOA): resources as first-class entities, with dynamic service/resource creation and destruction. Built on a Web services infrastructure, with resource virtualization at the core. Build grids from a small number of standards-based components that are replaceable, coarse-grained (e.g. brokers) and customizable. Support for dynamic, domain-specific content within the same standardized framework. Hiro Kishimoto: keynote, GGF17.

60 OGSA Capabilities
Execution management: job description & submission, scheduling, resource provisioning. Data services: common access facilities, efficient & reliable transport, replication services. Resource management: discovery, monitoring, control. Self-management: self-configuration, self-optimization, self-healing. Information services: registry, notification, logging/auditing. Security: cross-organizational users; trust nobody; authorized access only. All defined as OGSA "profiles" on a Web services foundation. Hiro Kishimoto: keynote, GGF17.

61 Basic Data Interfaces
Storage management, e.g. Storage Resource Management (SRM). Data access: ByteIO; Data Access & Integration (DAI). Data transfer: Data Movement Interface Specification (DMIS); protocols (e.g. GridFTP). Replica management, metadata catalog, cache management. Hiro Kishimoto: keynote, GGF17.

62 Challenges

63 The State of the Art
Many successful Grid & e-Science projects; a few examples are shown in this talk. Many Grid systems, all largely incompatible, though interoperation talks are under way. Standardisation efforts, mainly via the Open Grid Forum (a merger of the GGF & EGA). Significant user investment is still required: there are few "out of the box" solutions.

64 Technical Challenges
Issues you can't avoid: lack of complete knowledge (LOCK), latency, heterogeneity, autonomy, unreliability, scalability, change. A challenging goal: balance technical feasibility against virtual homogeneity, stability and reliability, while remaining affordable, manageable and maintainable.

65 Areas "In Development"
Data provenance. Quality of Service and Service Level Agreements. Resource brokering across all resources. Workflow scheduling and co-scheduling. Licence management. Software provisioning, deployment and update. Other areas too!

66 Operational Challenges
Management of distributed systems with local autonomy: deployment, testing & monitoring; user training; user support; rollout of upgrades. Security: distributed identity management, authorisation, revocation, incident response.

67 Grids as a Foundation for Solutions
The grid per se doesn't provide supported e-Science methods, supported data & information resources, computations, or convenient access. Grids help the providers of these, via international & national secure e-Infrastructure, standards for interoperation, and standard APIs to promote re-use. But research support must still be built, by application developers and resource providers.

68 Collaboration Challenges
Defining common goals. Defining common formats, e.g. schemas for data and metadata. Defining a common vocabulary, e.g. for metadata. Finding common technology: standards should help, eventually. Collecting metadata: automate where possible.

69 Social Challenges
Changing cultures: rewarding data & resource sharing; requiring publication of data. Taking the first steps: if everyone shares, everyone wins, but the first people to share must not lose out. Sustainable funding: the technology must persist, and the data must persist.

70 Summary & Conclusions

71 Summary
E-Science exploits distributed computing resources to enable new discoveries, new collaborations and new ways of working. Grid is an enabling technology for e-Science. Many successful projects exist; many challenges remain.

72 UK e-Science
e-Science Institute, Globus Alliance, Digital Curation Centre, Open Middleware Infrastructure Institute, National Centre for e-Social Science, Grid Operations Support Centre, National Institute for Environmental e-Science, CeSC (Cambridge), EGEE, ChinaGrid. The slide shows the National Centre, 8 regional centres and 2 laboratories in blue; that is the original set-up at the start of UK e-Science in August 2001. NeSC is jointly run by Edinburgh & Glasgow Universities. In 2003 several smaller centres were added (shown in vermilion). The e-Science Institute is run by the National e-Science Centre; established in 2001, it runs a programme of events and hosts visiting international researchers. The Open Middleware Infrastructure Institute was established in 2004 to provide support and direction for Grid middleware developed in the UK; it is based at the University of Southampton. The Grid Operations Support Centre was established in 2004. The Digital Curation Centre was established in 2004 by the Universities of Edinburgh and Glasgow, the UK Online Library Network at the University of Bath, and the Central Laboratories at Daresbury and Rutherford; its job is to provide advice on curating scientific data and on preserving digital media, formats, and access software. Edinburgh is one of the 4 founders of the Globus Alliance (September 2003), which takes responsibility for the future of the Globus Toolkit; the other founders are Chicago University (Argonne National Lab), the University of Southern California (Information Sciences Institute), and PDC, Stockholm, Sweden. The EU EGEE project (Enabling Grids for E-Science in Europe) is establishing a common framework for Grids in Europe; the UK e-Science programme has several connections with EGEE, and NeSC leads the training component for the whole of Europe.

73 Questions & Comments

