Deconstructing PLC
PlanetLab Developer's Meeting, May 13-14, 2008
Larry Peterson

Overview
PlanetLab NG = GENI Prototype PlanetLab
geniwrapper (Soner Sevinc)
– PLC wrapper: prototype done, integration underway
– NM wrapper: prototype in progress
Wrapper includes…
– interfaces
– namespaces
– security mechanisms
Migration plan
– seed registries from PLC's DB
– current and new interfaces coexist
– unbundle PLC over time
– experiment with peering

Security Architecture
Authorities
– responsible for (vouch for) the objects they manage
Global Identifier (GID)
– actually a certificate
– (UUID, HRN, PubKey, TTL) signed by chain of authorities
Human Readable Name (HRN)
– e.g., planetlab.eu.inria.p2p
Credentials
– slice: explicitly identifies permitted operations
– component (aka ticket): explicitly identifies resources (aka rspec)
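The GID described above can be sketched as follows. This is a minimal illustration, not the actual geniwrapper code: the field names mirror the (UUID, HRN, PubKey, TTL) tuple on the slide, while `toy_sign`/`verify_gid` are hypothetical stand-ins for a real public-key signature scheme, and verification with the issuer's private key is a simplification a real chain would not make.

```python
import hashlib
import uuid as uuidlib
from dataclasses import dataclass

@dataclass(frozen=True)
class GID:
    """Global Identifier: really a certificate binding (UUID, HRN, PubKey, TTL),
    signed by the chain of authorities implied by the HRN hierarchy
    ("planetlab" vouches for "planetlab.eu", which vouches for
    "planetlab.eu.inria", and so on)."""
    uuid: str
    hrn: str          # human readable name, e.g. "planetlab.eu.inria.p2p"
    pubkey: str
    ttl: int          # validity period, in seconds
    signature: str    # signature by the parent authority

def toy_sign(issuer_privkey: str, payload: str) -> str:
    # Stand-in for a real signature (e.g. RSA over the certificate body).
    return hashlib.sha256((issuer_privkey + payload).encode()).hexdigest()

def issue_gid(issuer_privkey: str, hrn: str, pubkey: str, ttl: int = 86400) -> GID:
    uid = str(uuidlib.uuid4())
    sig = toy_sign(issuer_privkey, f"{uid}|{hrn}|{pubkey}|{ttl}")
    return GID(uid, hrn, pubkey, ttl, sig)

def verify_gid(issuer_privkey: str, gid: GID) -> bool:
    # Toy check: recompute the signature; real code would use the
    # issuer's *public* key and walk the whole authority chain.
    expected = toy_sign(issuer_privkey, f"{gid.uuid}|{gid.hrn}|{gid.pubkey}|{gid.ttl}")
    return expected == gid.signature
```

The point of the sketch is the shape of the object: the GID carries its own identity, name, key, and lifetime, and trust comes from the signing chain rather than from where the record is stored.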

Parts List
Interfaces
– Slice Interface: create & control slices/slivers
– Registry Interface: bind & resolve naming info
– Management Interface: query & reboot components
– Peering Interface: exchange resource rights
– Uber Researcher Interface: slice interface & so much more
Entities
– Slice Registry (SR): SA responsible for users & slices; exports registry interface
– Slice Manager (SM): creates & controls slices; exports researcher interface
– Aggregate Manager (AM): responsible for a set of components; exports slice & mgmt interfaces
– Component Manager (CM): controls a component; exports slice & mgmt interfaces
– Component Registry (CR): MA responsible for components; exports registry interface
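The parts list pairs interfaces with the entities that export them, which maps naturally onto abstract classes. A minimal sketch, with hypothetical method names (the slide names the interfaces but not their exact operations):

```python
from abc import ABC, abstractmethod

class SliceInterface(ABC):
    """Create & control slices/slivers."""
    @abstractmethod
    def create_slice(self, hrn, rspec): ...
    @abstractmethod
    def delete_slice(self, hrn): ...

class RegistryInterface(ABC):
    """Bind & resolve naming info."""
    @abstractmethod
    def register(self, record): ...
    @abstractmethod
    def resolve(self, hrn): ...

class ManagementInterface(ABC):
    """Query & reboot components."""
    @abstractmethod
    def get_status(self, component_id): ...
    @abstractmethod
    def reboot(self, component_id): ...

# Each entity is characterized by which interfaces it exports; per the
# parts list, a Component Manager exports the slice & mgmt interfaces.
class ComponentManager(SliceInterface, ManagementInterface):
    """Controls a single component."""
    def __init__(self):
        self.slivers = {}                 # slice hrn -> rspec on this node
    def create_slice(self, hrn, rspec):
        self.slivers[hrn] = rspec
    def delete_slice(self, hrn):
        self.slivers.pop(hrn, None)
    def get_status(self, component_id):
        return "up"                       # placeholder status
    def reboot(self, component_id):
        return True                       # placeholder action
```

The same pattern covers the rest of the list: an SR or CR would subclass `RegistryInterface`, an AM would combine `SliceInterface` and `ManagementInterface` over a set of components.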

Case 1
[architecture diagram: a User and PLC's internal SM, SR, AM, and CM components]
Following focuses on slice creation and not node management; does not include CR (would be associated with each Aggregate).

Case 2
[deployment diagram: PLC's SM, SR, AM, and CM components, a User, and a separate SM at Emulab]

Case 3
[deployment diagram: a User's SM spanning AMs and CMs at both PLC and Emulab]

Case 3a
[deployment diagram: a User's SM spanning AMs and CMs at PLC and VINI]

Case 4
[deployment diagram: a User's SM spanning PLC and PLE, each with its own SR]

Case 4a
[deployment diagram: PLC and PLE peering, each with its own SM, SR, and User]

Peering Issues
N x N vs Hierarchy?
– PLC, PLE, PLJ/PLA, …
– VINI, GLabs, …
– Emulab, DETER, …
At what level?
– Registry + Slice Interface
– Peering Interface
How rich is the policy?
– slice count
– sliver count
– arbitrary resources
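The three policy tiers on this slide (slice count, sliver count, arbitrary resources) can all be expressed as quotas keyed by resource name, which is one way to keep the peering policy extensible. A minimal sketch under that assumption; the key names are illustrative, not a defined schema:

```python
def admits(policy, totals):
    """Check a peer's would-be resource totals against a quota policy.

    policy: quotas per resource, e.g. {"slices": 10} for the simplest
            policy, {"slices": 10, "slivers": 500} for a richer one, or
            arbitrary keys ("bandwidth_mbps", ...) for the richest tier.
    totals: what the peer would hold if the request were granted.
    Resources not named in the policy are unconstrained.
    """
    return all(totals.get(resource, 0) <= limit
               for resource, limit in policy.items())
```

Usage: a slice-count-only policy and an arbitrary-resource policy run through the same check, so making the policy richer means adding keys, not changing the peering interface.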

Meeting Notes
The following slides report "roadmap" discussions from the meeting.

Deconstructing PLC
Modify current DB/API to support wrapper (Reid)
Add "slice interface" to PLC wrapper (now have an aggregate) (Scott)
Port "slice interface" to NM wrapper (now have a component) (Scott)
– Revisit PLC/NM sync in light of delegation
Specialize AM for VINI (understand topology) (Andy)
Integrate wrapper GUI into PLC GUI (Reid)
Implement a minimal SM = aggregate of aggregates (Aki)
– Exports the slice interface, or something more?
– Caches node info / remembers where slice is embedded
– Must be configurable -- what aggregates does it know (set policy)
  - How is this module named & accessed?
– Filters the list it gives back according to caller
  - Can I present a full rspec to this call?
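The "minimal SM = aggregate of aggregates" item above can be sketched as a class that holds a configured list of aggregates, fans slice creation out to them, remembers where each slice landed, and filters what it returns by caller. This is a sketch of the idea only; `FakeAggregate` and the `admits` policy hook are hypothetical names, not part of any PlanetLab API.

```python
class FakeAggregate:
    """Stand-in for an Aggregate Manager, for illustration."""
    def __init__(self, nodes, allowed_callers):
        self.nodes = nodes
        self.allowed_callers = allowed_callers
    def admits(self, caller):                  # per-aggregate policy hook
        return caller in self.allowed_callers
    def create_slice(self, hrn, rspec):
        pass                                   # would instantiate slivers

class SliceManager:
    """Minimal SM: an aggregate of aggregates.

    The set of aggregates it knows is configuration (policy), not
    discovery; results given back are filtered according to the caller."""
    def __init__(self, aggregates):
        self.aggregates = aggregates
        self.embedding_cache = {}              # slice hrn -> aggregates used

    def create_slice(self, caller, hrn, rspec):
        used = []
        for agg in self.aggregates:
            if agg.admits(caller):
                agg.create_slice(hrn, rspec)
                used.append(agg)
        self.embedding_cache[hrn] = used       # remember where slice is embedded
        return used

    def list_nodes(self, caller):
        # Filter the list it gives back according to the caller.
        return [n for agg in self.aggregates if agg.admits(caller)
                  for n in agg.nodes]
```

Whether this module exports exactly the slice interface or something more, and how it is named and accessed, are left open here just as they were in the meeting.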

Deconstructing (cont)
Longer-term issues
– Federation outside the PL family
  - 3rd-party SM (delegation is important)
  - Multiple-aggregate SMs relevant here
– Worry about the "management interface" (currently private)
  - Get emergency shutdown right
  - What about killing slices on a peer aggregate?
– Overhaul security mechanisms
  - Make sure security modules leave an audit trail

Monitoring Software
Package for distribution
NM live-ness test
– both PLC-instantiated and delegated slice creation (Utah has code)
Export monitor info to tech contacts
– Uber monitor page (comon + monitor + …)
– Place for techs to communicate with us (and track it)
Make run-levels real (support tech intervention)
– Give out root when in debug mode
Maintain "known security issues" page

QA System
Support virtual and physical test nodes
– Use Emulab (potentially available as a std Emulab option)
Package for distribution
– OneLab uptake is an important milestone
Make output logs readily available to developers (notification)

RSpec Discussion
Define an rspec that works today
– Today's attributes (only those that users can set)
– Works with wrapper slice interface
Scope the rspec
– Users can query/set themselves -- not in rspec
– Admin can install themselves -- not in rspec
– Requires privilege to establish (is allocate-able) -- is an rspec
Extend today's attribute set to include some new resource(s)
– Allocate whole (non-sharable) physical device to a slice for some time
– GRE tunnel keys
– Supercharged PL node (motivates private attribute namespace)
  - Specify parameters of hw fast-path: queues, buffers, bw, protocol…
Contexts (usage scenarios)
– Configuring nodes (this is something different)
– Advertising resources (includes same language, but not limited to it)
  - Descriptive; includes that which is allocate-able, but other info as well
– Requesting / promising resources (definitely)
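To make the scoping rule above concrete, here is a toy request rspec containing only allocate-able attributes, the ones that require privilege to establish. Every attribute name below is hypothetical, invented for illustration; the slide proposes the resource kinds (exclusive device lease, GRE tunnel keys, fast-path parameters in a private namespace) but not a schema.

```python
# Toy request rspec: only things that require privilege to allocate.
# User-settable and admin-installable attributes are deliberately absent,
# per the scoping discussion -- they are not part of the rspec.
request_rspec = {
    "slice": "planetlab.eu.inria.p2p",
    "nodes": [
        {
            "hostname": "node1.example.org",
            # whole (non-sharable) physical device, for a bounded time:
            "exclusive": True,
            "lease": {"start": "2008-05-14T00:00Z", "duration_hours": 4},
            # GRE tunnel keys:
            "gre_tunnel": {"key": 42, "peer": "node2.example.org"},
            # "supercharged" node: hw fast-path parameters live in a
            # private attribute namespace of their own:
            "fastpath": {"queues": 8, "buffers_kb": 512, "bw_mbps": 1000},
        },
    ],
}
```

An advertisement rspec would use the same language plus extra descriptive info that is not itself allocate-able, per the "contexts" discussion.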

RSpec / Data Modeling
Identify PL current attributes (Reid)
Prepare draft data model (Mary)
Get/install Eclipse EMF tool (watch the tutorial)
– Extract code generator for Python (Scott)
Put up web page for comment (Reid)
Future
– Embrace model in the PLC/DB
– Revisit the over-the-wire representation (getSliver polling)

Resource Allocation
Tickets are opaque
– May be a table index
– May be an rspec
– May be a PLC DB entry (current implementation)
Tickets split/reassigned/redeemed by their source
– Source is the only one that can interpret the ticket
PlanetLab reality
– PLC hands out tickets
– Tickets redeemed/split at nodes
  - Necessary to implement Sirius
– PLC and nodes are in cahoots
Alternative interface
– 2D table of resources/time (owner of the allocation & slice)
  - Calendar-like (trivially implement Sirius on top of this interface)
– Ops: get/set_owner & get/set_slice
  - Owner decides what slice gets to use; slice then consumes
  - set_slice is like split (split folds together slice & owner; split can't revoke)
– Also an escrow service that swaps rights (client of this interface)
– Does this break PL's "node state is soft" model?
  - Client gets receipt & has to refresh node state
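The alternative interface above can be sketched as a calendar-like table keyed by (resource, timeslot), with exactly the four operations named on the slide. The class and slot representation are illustrative assumptions, not an implemented PlanetLab API; the one behavior the sketch does take from the slide is that, unlike split, the owner can revoke because owner and slice are kept separate.

```python
class AllocationTable:
    """Calendar-like 2D table: (resource, timeslot) -> (owner, slice).

    The owner of a cell decides which slice gets to use it; the slice
    then consumes it. set_slice plays the role of split, except that
    owner and slice are not folded together, so the owner can revoke."""
    def __init__(self):
        self.cells = {}   # (resource, slot) -> {"owner": ..., "slice": ...}

    def set_owner(self, resource, slot, owner):
        cell = self.cells.setdefault((resource, slot),
                                     {"owner": None, "slice": None})
        cell["owner"] = owner

    def get_owner(self, resource, slot):
        return self.cells.get((resource, slot), {}).get("owner")

    def set_slice(self, resource, slot, caller, slice_hrn):
        cell = self.cells.get((resource, slot))
        if cell is None or cell["owner"] != caller:
            raise PermissionError("only the cell's owner may assign a slice")
        cell["slice"] = slice_hrn   # owner can later reassign (revoke)

    def get_slice(self, resource, slot):
        return self.cells.get((resource, slot), {}).get("slice")
```

A Sirius-style reservation service then reduces to calling `set_owner` over the slots it controls and `set_slice` when a reservation is granted; the soft-state question remains: the client holds a receipt and must refresh node state.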