GLIF Linking the Globe with LIGHT Gigi Karmous-Edwards Principal Scientist MCNC APAN 2008, Hawaii Aloha Kakahiaka !

Agenda
1. What is GLIF?
2. Why does GLIF exist?
3. How does GLIF function?
4. What has GLIF accomplished?
5. Virtualization
6. The many challenges ahead
7. Conclusions

What is GLIF?
GLIF, the Global Lambda Integrated Facility, is an international virtual organization that promotes the paradigm of lambda networking. GLIF participants jointly make lambdas available as an integrated global facility for use in data-intensive research. GLIF brings together leading network engineers and researchers worldwide, who collaborate to identify and solve the challenges of building a global facility.

What is GLIF?
GLIF is an international virtual organization, managed as a cooperative activity with 'participants' rather than 'members' and with a lightweight governance structure. It is open to anybody who shares the vision of optical interconnection of different facilities and who voluntarily contributes network resources (e.g. equipment, lambdas) or actively participates in relevant activities. Please join the mailing list if you are interested in being part of the solution for facilitating global lambda networks for research and education.

GLIF: more resources are now available; the next version is due in two weeks!

Why does GLIF exist? … E-science
Researchers need to do their work globally. E-science means global, large-scale scientific collaborations enabled through distributed computational and communication infrastructure. It combines scientific instruments and sensors, distributed data archives, computing resources, and visualization to solve complex scientific problems in physics, molecular biology, environmental science, health, entertainment, etc. In the future, this facility will be useful for K-20 education, not just for e-scientists.

Developing a Global E-science Laboratory (GEL)
Korea's HVEM is one of a kind in the world. The aim is to provide global access to unique instruments for the purpose of advancing science for humanity, using a web-service interface and a high-capacity optical network for output. The laboratory supports:
- viewing the real-time video from the CCD camera
- accessing or manipulating the 2-D or 3-D images
- generating the workflow specification and requesting that the workflow be executed
- searching the images or video files, papers, and experiments in the databases or storage
Source: Hyuck Han, Hyungsoo Jung, Heon Y. Yeom, Hee S. Kweon, and Jysoo Lee, "HVEM Grid: Experiences in Constructing an Electron Microscopy Grid".

Accommodating Researchers
Researchers need high capacity (1 Gb/s to 10 Gb/s or more) and QoS, which is difficult to guarantee over a routed network, and their large flows must not disrupt current users. So we need hybrid networking: IP plus lambda networking. A lightpath is a high-quality, high-capacity optical end-to-end network connection. Lightpaths provide applications with dedicated bandwidth with fixed characteristics, at relatively low cost and with added security.

The GLIF Story …
September 2001: the first Lambda Workshop in Amsterdam, followed by an open Lambda Workshop organized by TERENA. The second Lambda Workshop, in 2002 in Amsterdam, was attached to iGrid2002 and hosted by Science Park Amsterdam. August 2003: the third Lambda Workshop in Reykjavik, hosted by NORDUnet and attached to the NORDUnet 2003 Conference; there the GLIF name was created.

How does GLIF function?
There are four working groups: Governance, Research and Applications, Technical, and Control Plane. Secretariat functions are provided by TERENA. GLIF holds an annual meeting; the next is the 8th Annual Global LambdaGrid Workshop, Seattle, USA, 1-2 October 2008. The Tech and Control Plane working groups also hold semi-annual meetings (most recently this past weekend).

GLIF Working Groups
Governance and Growth (GOV) Working Group. Chair: Kees Neggers (SURFnet). Goals: to identify future goals in terms of lambdas, connections, and application support, and to decide what cross-domain policies need to be put in place.
Research and Applications (RAP) Working Group. Chairs: Maxine Brown (UIC) and Larry Smarr (UCSD). Goals: to train a new generation of scientists in the use of super-networks.

GLIF Working Groups
Technical Issues (Tech) Working Group. Co-chairs: Erik-Jan Bos (SURFnet) and René Hatem (CANARIE). Goals: to design and implement an international LambdaGrid infrastructure, identifying which equipment is being used, which connections are required, and which functions and services should be provided.
Control Plane and Grid Integration Middleware Working Group. Chair: Gigi Karmous-Edwards (MCNC). Goals: to agree on the interfaces and protocols through which the control planes of the contributed lambda resources talk to each other.

GLIF RAP Working Group Accomplishments
- Documented enabling technologies (middleware, control-plane software) and the applications they enable (e.g., DRAGON, UCLP)
- Documented countries' activities (feedback to NRENs)
- Helped applications get started
- Provides a resource for groups trying to get funding for GLIF-related activities; GLIF "branding" adds credibility
- Documents applications with brief descriptions and URL pointers (a template will be created and forwarded to the RAP list)
- Developed a GLIF primer (how to find, educate, and promote applications)
- Provided PR: What can GLIF do for you?
- Provided PR: promotes domain-specific applications (eVLBI, CineGrid, etc.) to inspire and motivate potential new applications within countries

GLIF Tech Working Group Accomplishments
Co-chairs: Erik-Jan Bos and René Hatem; Secretary: Kevin Meynell
- Developed the concept of GOLEs
- Documented all technical information on contributed resources in a centralized database
- Developed a best-practices and issues document for hybrid networking
- Developed a best-practices document for fault resolution
- Holds monthly resource update calls
- Shares open-source toolkits, such as the TL1 toolkit
- And more…
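The TL1 toolkit mentioned above scripts TL1, the management language spoken by many of the optical switches deployed at GOLEs. As a rough sketch (not the actual toolkit), the helper below formats TL1 command strings of the general shape VERB-MODIFIER:tid:aid:ctag::params; the node name and access identifiers used in the example are hypothetical and equipment-dependent.

```python
# Sketch: formatting TL1 command strings, as a TL1 scripting toolkit
# might do before sending them to an optical switch over telnet/SSH.
# The general TL1 input shape is VERB-MODIFIER:<tid>:<aid>:<ctag>::<params>;
# Specific verbs and AIDs vary per vendor; the ones below are illustrative.

def tl1(verb: str, tid: str, aid: str, ctag: int, params: str = "") -> str:
    """Format a single TL1 command line.

    verb   -- command verb plus modifiers, e.g. "RTRV-ALM-ALL"
    tid    -- target network element name
    aid    -- access identifier for the entity addressed ("" for all)
    ctag   -- correlation tag echoed back in the response
    params -- optional parameter block
    """
    return f"{verb}:{tid}:{aid}:{ctag}::{params};"

# Retrieve all alarms from a hypothetical node named NLIGHT01:
print(tl1("RTRV-ALM-ALL", "NLIGHT01", "", 101))
# -> RTRV-ALM-ALL:NLIGHT01::101::;
```

The correlation tag (ctag) is what lets a script match asynchronous switch responses back to the commands it sent.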

GOLEs
GLIF Open Lightpath Exchanges: GLIF lambdas are interconnected through established lightpath exchange points known as GOLEs. A GOLE consists of equipment capable of terminating lambdas and performing lightpath switching, allowing end-to-end connections. GOLEs have an open connection policy.

An example of a GOLE: NetherLight

Current GLIF Resources
- AMPATH - Miami
- CERN - Geneva
- CzechLight - Prague
- HKOEP - Hong Kong
- KRLight - Daejeon
- MAN LAN - New York
- MoscowLight - Moscow
- NetherLight - Amsterdam
- NGIX-East - Washington DC
- NorthernLight - Copenhagen
- Pacific Wave - Los Angeles
- Pacific Wave - Seattle
- Pacific Wave - Sunnyvale
- StarLight - Chicago
- T-LEX - Tokyo
- TaiwanLight - Taipei
- UKLight - London
Plus contributed lambdas from AARNet and US LHCNet.

GLIF Control Plane and Grid Middleware Integration Working Group
Chair: Gigi Karmous-Edwards; Secretary: Licia Florio
- Virtualization of network resources as well as other key resources (compute, storage, instruments, etc.) via "on-demand" and "advance reservations"
- Agreed to adopt the Network Description Language (NDL), based on RDF
- Works closely with two OGF working groups for standardization: the Grid High Performance Networking WG and the Network Markup Language WG
- Shared current research experiments and open-source code for controlling lightpaths
- Developed an architecture for next-generation lambda resources coordinated with other key resources
- Agreed to focus on a Generic Network Interface (GNI); comparing existing APIs similar to GNI
- Will have an initial GNI specification by the October meeting
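The NDL decision deserves a concrete illustration: NDL describes networks as RDF so that topology from different domains can be merged and reasoned over. The sketch below, which is illustrative rather than actual GLIF tooling, emits an NDL-style description of a device and its interfaces in Turtle; the ndl: namespace URI follows the UvA NDL schema, but the device/interface names and the exact property set used here are assumptions.

```python
# Sketch: emitting an NDL-style RDF description of a device and its
# interfaces in Turtle syntax, using only the standard library.
# The ndl: namespace and properties (hasInterface, connectedTo) follow
# the UvA NDL schema; treat the exact terms as illustrative.

NDL = "http://www.science.uva.nl/research/sne/ndl#"

def ndl_device(name: str, interfaces: dict) -> str:
    """Render one ndl:Device and its interfaces as Turtle triples.

    interfaces maps a local interface name to the URI of the remote
    interface it is cross-connected to, or "" if unconnected.
    """
    lines = [f"@prefix ndl: <{NDL}> ."]
    lines.append(f":{name} a ndl:Device ;")
    items = list(interfaces.items())
    for i, (ifname, _remote) in enumerate(items):
        sep = " ;" if i < len(items) - 1 else " ."
        lines.append(f"    ndl:hasInterface :{ifname}{sep}")
    for ifname, remote in items:
        lines.append(f":{ifname} a ndl:Interface .")
        if remote:
            lines.append(f":{ifname} ndl:connectedTo <{remote}> .")
    return "\n".join(lines)

# Hypothetical GOLE switch with one cross-connected interface:
print(ndl_device("NetherLight-ONS", {
    "NetherLight-ONS-if1": "http://example.org/StarLight#if7",
    "NetherLight-ONS-if2": "",
}))
```

Descriptions like this are what allow a broker to merge per-domain graphs and query them for a multi-domain lightpath.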

KISS
Keep It Simple and Smart! (cf. Akamai)

[Figure: GLIF Grid Resource Registry. Two Grid administrative domains, A and B. In each, a user reaches a Resource Broker (RB) via the Grid Application Interface (GAI); the RB talks to a Network Resource Manager (NRM) and its network over the GNI, and to compute, storage, and instrument resource managers (CRM, SRM, IRM) over the GCI, GSI, and GII. All resource managers publish resource information to a publish/subscribe broker holding resource information and references.]
Legend: RB: Resource Broker; DNRM: Domain Network Resource Manager; CRM: Compute Resource Manager; IRM: Instrument Resource Manager; SRM: Storage Resource Manager; GAI: Grid Application Interface; GNI: Grid Network Interface; GCI: Grid Compute Interface; GSI: Grid Storage Interface; GII: Grid Instrument Interface.
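The publish/subscribe broker at the heart of the registry figure reduces to a small pattern: resource managers publish resource descriptions, and any broker subscribed to that resource type receives a reference. A toy sketch with invented names, not GLIF code:

```python
# Sketch of the registry's publish/subscribe pattern: resource managers
# publish descriptions of contributed resources; resource brokers
# subscribe per resource type and are called back with references.
# Class and method names here are illustrative assumptions.

from collections import defaultdict

class ResourceRegistry:
    def __init__(self):
        self._subs = defaultdict(list)       # resource type -> callbacks
        self._resources = defaultdict(list)  # resource type -> descriptions

    def subscribe(self, rtype, callback):
        """A broker registers interest in one resource type."""
        self._subs[rtype].append(callback)

    def publish(self, rtype, info):
        """A resource manager announces a resource; subscribers are notified."""
        self._resources[rtype].append(info)
        for cb in self._subs[rtype]:
            cb(info)

reg = ResourceRegistry()
seen = []
reg.subscribe("network", seen.append)  # an RB subscribes to network resources
reg.publish("network", {"lambda": "ams-chi-1", "bw": "10G"})  # an NRM publishes
print(seen)  # the broker now holds a reference to the new lambda
```

The design point this illustrates is decoupling: brokers in domain A learn about domain B's lambdas without talking to B's managers directly.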

[Figure: Resource Broker internals. A request processor backed by security/AAA and a policy engine feeds a resource meta-scheduler with monitoring, discovery, and static information (policy, etc.); multi-domain path computation and resource co-allocation via HARC acceptors sit alongside. The RB publishes/subscribes to the GLIF Grid Resource Registry and reaches resource managers, including HARC RMs with fault management and performance functions, over the GAI, GNI, GSI, GII, GxI, etc.]

[Figure: Network Resource Manager internals. A request processor backed by security/AAA and a policy engine; topology/discovery and monitoring feeding a resource repository; path computation, a reservation timetable, and resource allocation; network management (fault management and performance) via e.g. TL1, SNMP, XML, MDS. The NRM publishes information to the external Resource Broker (ERB) over GNI and publishes/subscribes static information (policy, etc.).]
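The reservation timetable in the NRM figure is essentially interval bookkeeping: an "on-demand" request is a reservation starting now, an "advance" request starts later, and both must avoid overlapping an existing booking on the same lambda. A minimal sketch, with names (Timetable, reserve) that are illustrative and not taken from any GLIF specification:

```python
# Sketch of an NRM reservation timetable: before allocating a lightpath,
# the manager checks the requested time window against existing bookings
# for that lambda. Windows are half-open [start, end) intervals.

from dataclasses import dataclass, field

@dataclass
class Timetable:
    booked: dict = field(default_factory=dict)  # lambda id -> [(start, end)]

    def reserve(self, lam: str, start: int, end: int) -> bool:
        """Book [start, end) on lambda `lam` unless it overlaps a booking."""
        windows = self.booked.setdefault(lam, [])
        if any(s < end and start < e for s, e in windows):
            return False          # conflicts with an existing reservation
        windows.append((start, end))
        return True

tt = Timetable()
assert tt.reserve("ams-chi-1", 100, 200)      # on-demand: starts now
assert tt.reserve("ams-chi-1", 200, 300)      # advance: adjacent window, OK
assert not tt.reserve("ams-chi-1", 150, 250)  # overlaps both, rejected
```

Real schedulers add policy (maximum duration, priorities) on top of exactly this overlap test.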

NCSU's Virtual Computing Lab (VCL) vcl.ncsu.edu
MCNC is experimenting with new virtual compute services for North Carolina's K-20 community. VCL is a reservation and provisioning system:
- allocates nodes to users on a reservation basis
- reservations can be for now (on-demand) or the future (scheduled in advance)
- can allocate both single nodes and clusters of nodes
- reservation lengths are policy-driven: a selection of 1-4 hours, or an open end time allowing a month or more

Virtual Computing Lab
MCNC will host 1,000 nodes for NCSU this year; pilots are under way with K-20-type users. The hardware comprises IBM BladeCenter blade servers, housed in a datacenter with IBM's energy-efficiency doors, plus standalone workstations housed anywhere (our lab machines are included when the labs are closed). Work is ongoing on Sun blade servers, and VCL partners are working on Dell and HP blades. Nodes can easily be moved between the HPC cluster and the VCL system; we move nodes to HPC during student breaks.

[Figure: VCL architecture. Undifferentiated local or distributed hardware resources (blades, servers, desktops, storage) run OS images (Windows, Linux, other) either natively or in a virtual layer (e.g., VMware, Xen, MSVS2500), with applications such as WebSphere. A VCL Manager ("application" image stack: xCAT, VCL code, IBM TM, web server, database, etc.) and middleware (e.g., LSF) map users to images to resources; end users access work-flow and visualization services via RDP, VNC, or X-Win clients. Differentiators: user-to-image-to-resource mapping, management and provenance; simplicity, flexibility, reliability, scalability, economy.]

Some Stats
About 1,000 blades (ca. 140 used for VCL individual seats, the rest for VCL HPC cycles), plus several hundred otherwise-idle student laboratory machines. Environment baselines are typically Windows and Linux, with a variety of applications. Depending on how demanding an application is, service may be virtualized (VMware) or bare-metal. About 70,000 single-seat image reservations per semester; in Fall 2007 usage peaked at about 2,500 reservations per day. The system serves a population of 30,000 students (about 6,000 unique users in a semester). Most of the "individual seat" requests are on-demand "now" reservations, ca. 90% of requests. System availability is about 99%.

Issues and Challenges
- Key challenge with hybrid networking: the effect on IP while lambdas are dynamic
- Coordination of network resources and other Grid resources
- Two-phase commit for all involved resources; KISS
- Topology abstractions, including end points, or services
- Monitoring: MonALISA, perfSONAR, …
- Advertising resources globally: agree on what to represent and how (NDL, etc.)
- Policy
- Different implementations of each component (no need to standardize on how things are done, just the interfaces)
- Agree on functional components
- Focus on a couple of key interfaces (a small set of options; use the lowest common denominator)
- Prioritize: GNI …
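The two-phase-commit challenge above is the classic prepare-then-commit pattern applied to co-allocation: every resource manager must first promise its resource, and a single refusal releases all promises, so a lightpath is never held while the compute it was reserved for is unavailable. (HARC, which appears in the broker figure, tackles this with replicated acceptors.) A toy sketch with invented interfaces, not GLIF or HARC code:

```python
# Sketch of two-phase commit across resource managers: phase 1 asks
# every manager to prepare (promise) its resource; phase 2 commits only
# if all promised, otherwise every promise is aborted. The RM interface
# (prepare/commit/abort) is an illustrative assumption.

def co_allocate(managers) -> bool:
    """Atomically allocate across managers with prepare()/commit()/abort()."""
    prepared = []
    for m in managers:
        if m.prepare():
            prepared.append(m)
        else:                      # one refusal aborts the whole request
            for p in prepared:
                p.abort()
            return False
    for m in prepared:             # all promised: now safe to commit
        m.commit()
    return True

class RM:
    """Trivial resource manager that accepts or refuses on command."""
    def __init__(self, ok: bool):
        self.ok = ok
        self.state = "idle"
    def prepare(self) -> bool:
        if self.ok:
            self.state = "prepared"
        return self.ok
    def commit(self):
        self.state = "committed"
    def abort(self):
        self.state = "idle"

net, cpu = RM(True), RM(False)
assert not co_allocate([net, cpu])  # compute refuses, so the network
assert net.state == "idle"          # promise is released: nothing held
```

This is why the slide pairs two-phase commit with KISS: the protocol itself is small; the hard part is agreeing that every resource manager exposes these three operations.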

Conclusions

A global integrated facility is necessary to support scientific research, education, and networking research. Every day there are more requests for use and more resources contributed. GLIF currently behaves as a global collaborative testbed; our goal is to provide global virtualization of shared resources, including network lambdas, compute, storage, instruments, etc. Next-generation networks will be a hybrid of routed and lambda-switched networks, not just for high-end research. The research networks (NRENs and government-sponsored testbeds) are taking these bold steps on GLIF and testbed infrastructures, applying lessons learned to production quickly. International collaboration is a key ingredient for the future of scientific discovery and education, and the optical network plays the most critical role in achieving this!

Mahalo! Gigi Karmous-Edwards, APAN 2008