OxGrid, A Campus Grid for the University of Oxford Dr. David Wallom

Outline
Why make a campus grid?
How are we building it?
–Computational capability
–Data capability

What is a Grid?
Single sign-on to multiple resources located in different administrative domains.
A Virtual Organisation of users that spans physical organisational boundaries.

Why a grid?
Many new problems in research need access to massive computational and data capacity, beyond the capability of any single machine.
If the need is too large for a single existing resource, construct a system able to use a number of appropriate resources concurrently.
Designed so that:
–users have single sign-on to access multiple resources and can switch between them seamlessly
–the layout can be dynamically altered without user interference
–once data is placed, or a job started, on a remote resource, its status is monitored to make sure it stays running/available

Why make a campus grid?
Many computers throughout the University are under-utilised:
–PCs
  Idle time (about 16 hr/day for an average desktop)
  Unused disk space (~60% of a modern hard drive)
  Already purchased and depreciating daily
  A readily available resource, e.g. OULS has up to 1200 desktop computers
–Large servers
  Expensive to purchase, house and run (extra staff)
  Rarely 100% utilised

OxGrid, a University Campus Grid
Single entry point for Oxford users to shared and dedicated resources
Seamless access to the National Grid Service and the OSC for registered users
Single sign-on using the general UK e-Science infrastructure, integrated with current authentication methods
[Architecture diagram: OxGrid central management (computational task distribution, storage management) linking Oxford users, college resources and departmental resources with the National Grid Service and the Oxford Supercomputing Centre]

Authorisation and Authentication
Initially use the standard UK e-Science Certification Authority
–X509 digital certificates issued on a per-user basis (a sketch of reading the DN from such a certificate follows this slide)
–OUCS is a Registration Authority for this CA
For users who only wish to access internal (University) resources, a Kerberos CA has been installed, controlled by the Oxford central Kerberos system (Herald username)
Both are stored in a credential repository to minimise human-certificate interaction.
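
As a minimal, hedged illustration of what such a certificate carries, the Python sketch below reads the Distinguished Name (DN) out of a PEM-encoded user certificate using the third-party cryptography package; the file path and comment values are assumptions, not OxGrid-specific locations.

```python
# Minimal sketch: read the Distinguished Name (DN) from an X.509 user
# certificate. Requires the third-party 'cryptography' package; the file
# path below is only an example location.
from cryptography import x509

with open("usercert.pem", "rb") as f:      # e.g. ~/.globus/usercert.pem
    cert = x509.load_pem_x509_certificate(f.read())

# The DN is what each resource later maps to a local username.
print("Certificate DN:", cert.subject.rfc4514_string())
print("Issued by:     ", cert.issuer.rfc4514_string())
```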

Central System Components
Computational Task Distribution:
–Resource Broker: user access point and distribution of submitted tasks (a matchmaking sketch follows this slide)
–Information Service: all system capability and status information on which the resource broker bases its decisions
–Systems Monitoring: graphical presentation of the monitoring system for the helpdesk interface
–User Management: controls the virtual community and allows access to the various resources
–Accounting Service: allows full-system and single-resource usage to be recorded and charged for
Storage Management:
–Create a dynamic, multi-homed virtual file system
  Single central controller and large file-store for immediate access
  Connected to remote file-systems for access to larger storage capability
–Provide metadata mark-up for improved data mining
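
As a purely hypothetical sketch of the matchmaking the Resource Broker performs against the Information Service's view of the grid, the Python fragment below picks the least-loaded resource that satisfies a job's requirements; every class, field and resource name is invented for illustration and none of it is the OxGrid implementation.

```python
# Hypothetical matchmaking step of a resource broker: choose a resource,
# as advertised by the information service, that can run the job.
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    free_slots: int     # unused job slots reported by the information service
    memory_mb: int      # memory per slot
    software: set       # applications installed on the resource
    load: float         # fraction of the resource currently busy

def choose_resource(resources, cpus, memory_mb, needs=()):
    """Return the least-loaded resource that can run the job, or None."""
    candidates = [r for r in resources
                  if r.free_slots >= cpus
                  and r.memory_mb >= memory_mb
                  and set(needs) <= r.software]
    return min(candidates, key=lambda r: r.load, default=None)

resources = [
    Resource("ouls-condor", free_slots=200, memory_mb=1024, software={"blast"}, load=0.3),
    Resource("physics-cluster", free_slots=8, memory_mb=4096, software={"gaussian"}, load=0.7),
]
print(choose_resource(resources, cpus=4, memory_mb=2048, needs=["gaussian"]))
```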

Virtual Organisation/User Management & Accounting
The Grid Security Infrastructure uses a mapping between the Distinguished Name (DN) defined in a digital certificate and local usernames on each resource.
–Importantly, for each resource a user expects to use, their DN must be mapped locally (see the sketch below).
OxVOM
–Custom, in-house designed web-based user interface
–Persistent information stored in a relational database
–User DN list retrieved by remote resources using standard tools
Accounting is the basis of a possible charging model
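
On each resource the DN-to-local-account mapping is conventionally held in a grid-mapfile whose lines have the form `"DN" localuser`; the Python sketch below parses such a file purely as an illustration. The path and example DN are placeholders, and OxVOM itself is a web application, not this script.

```python
# Sketch of the DN -> local-username mapping kept in a grid-mapfile.
import re

def load_grid_mapfile(path="/etc/grid-security/grid-mapfile"):
    """Parse lines of the form: "DN" localuser"""
    mapping = {}
    with open(path) as f:
        for line in f:
            m = re.match(r'^"(?P<dn>[^"]+)"\s+(?P<user>\S+)', line.strip())
            if m:
                mapping[m.group("dn")] = m.group("user")
    return mapping

grid_map = load_grid_mapfile()
dn = "/C=UK/O=eScience/OU=Oxford/L=OeSC/CN=some user"   # example DN only
print(grid_map.get(dn, "not authorised on this resource"))
```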

Computational Resources
Core, accessible to all Campus Grid users
–Individual departmental clusters (dedicated compute resources)
–Condor clusters of PCs (cycle scavenging)
External, accessible to users that have registered with them
–National Grid Service
–OSC

Environmentally aware Condor systems
Increasingly, system owners shut down machines that are not being used
–Saves electricity
Develop a scheme to still use these systems within OxGrid
–Take advantage of Wake-on-LAN technology (a sketch of the magic packet follows this slide)
–Automate load balancing to start and stop worker nodes as necessary
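
Wake-on-LAN itself is a simple, well-defined protocol: a "magic packet" of six 0xFF bytes followed by the target MAC address repeated sixteen times, broadcast over UDP. The sketch below sends such a packet; the MAC address and the idea of a load-balancing daemon calling it are illustrative assumptions, not the OxGrid tooling.

```python
# Send a Wake-on-LAN magic packet: 6 x 0xFF then the MAC repeated 16 times,
# broadcast as a UDP datagram (ports 7 and 9 are commonly used).
import socket

def wake(mac="00:11:22:33:44:55", broadcast="255.255.255.255", port=9):
    payload = bytes.fromhex("FF" * 6 + mac.replace(":", "") * 16)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(payload, (broadcast, port))

# A load-balancing daemon might call this when queued jobs exceed free slots.
wake()
```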

Data Management
Engage data-intensive as well as computationally intensive research groups
Provide a remote store for those groups that cannot resource their own
Distribute the client software as widely as possible, including to departments not currently engaged in e-Research

Data Management
Software for creation of the system
–Storage Resource Broker (SRB) to create a large virtual datastore
  Through a central metadata catalogue, users interact with a single virtual file system even though the physical volumes may sit on several network resources (see the sketch below)
  Built-in metadata capability
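
To make the idea of the central metadata catalogue concrete, the toy Python sketch below resolves a single logical namespace to physical replicas held on different servers, with searchable metadata alongside. This is a conceptual illustration only, not the SRB/MCAT API (real clients use the tools listed on the later "SRB Client Implementations" slide); all paths and server names are invented.

```python
# Toy model of a metadata catalogue: logical names map to physical replicas
# and to metadata, so users never need to know where the bytes actually live.
catalogue = {
    "/oxgrid/home/user/results/run42.dat": [
        {"server": "store1.example.ox.ac.uk", "path": "/vol3/a81f/run42.dat"},
        {"server": "store2.example.ox.ac.uk", "path": "/data/9c02/run42.dat"},
    ],
}
metadata = {
    "/oxgrid/home/user/results/run42.dat": {"project": "example", "owner": "user"},
}

def resolve(logical_name):
    """Return the first available physical replica of a logical file."""
    replicas = catalogue.get(logical_name, [])
    return replicas[0] if replicas else None

print(resolve("/oxgrid/home/user/results/run42.dat"))
```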

SRB Architecture
[Diagram: the USER talks to the MCAT server, which consults the MCAT database and brokers access to Disk Servers 1-4]

SRB as a Data Grid
[Diagram: two SRB servers sharing a single MCAT database]
A data grid can have an arbitrary number of servers
Complexity is hidden from users

SRB Client Implementations
inQ – Windows GUI browser
Jargon – Java SRB client classes
–Pure Java implementation
mySRB – web-based GUI
–Run using a web browser
Matrix – web service for SRB workflow
All of these allow direct interaction with the data grid

How users interact with OxGrid
Log in to the system head node (Resource Broker)
Create a digital credential
Use the 'job-submission' script to create and submit jobs onto the Condor-G system (a sketch of what such a script might do follows this slide)
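
The slides do not show the contents of the 'job-submission' script, so the following is only a hedged guess at the shape of such a wrapper: it writes a Condor-G submit description using standard HTCondor "grid" universe syntax and hands it to condor_submit. The gatekeeper address, file names and arguments are placeholders, not the real OxGrid endpoints.

```python
# Sketch of a job-submission wrapper: generate a Condor-G submit file and
# pass it to condor_submit. Assumes HTCondor's grid universe with a GT2
# (Globus) gatekeeper; the hostname below is a placeholder.
import subprocess
import textwrap

def submit(executable, arguments="", gatekeeper="gatekeeper.example.ox.ac.uk/jobmanager-pbs"):
    submit_file = textwrap.dedent(f"""\
        universe      = grid
        grid_resource = gt2 {gatekeeper}
        executable    = {executable}
        arguments     = {arguments}
        output        = job.out
        error         = job.err
        log           = job.log
        queue
        """)
    with open("job.submit", "w") as f:
        f.write(submit_file)
    subprocess.run(["condor_submit", "job.submit"], check=True)

submit("render_frame.sh", "--frame 42")
```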

Users Installed several example applications –Graphics rendering

Use of Computing Power in the Humanities

Users
Installed several example applications
–Graphics rendering
–Physics
–Biochemistry
Computational users
–Chemistry & Materials Science
Data users
–IBVRE
Contacting currently registered users of both the OSC and the NGS
–Beneficial to these systems to move off users who do not need to be there, freeing capability for those who do
Data provision is an integral component of the grid
–Starting to contact large data users

Conclusions
Users are already able to log onto the Resource Broker and schedule work onto the NGS, the OSC and the OUCS Condor systems
We are working as quickly as possible to engage more users
We need these users to then go out and evangelise, bringing in both more users and more resources

Contact Telephone: