Advanced Operating Systems
University of Tehran, Dept. of EE and Computer Engineering
By: Dr. Nasser Yazdani
Lecture 12: Naming in Distributed Systems
Covered topics
- Naming systems in distributed systems
- References: Chapter 5 of the textbook; the Chord paper
Outline
- What is naming?
- DNS
- X.500
- Mobility challenges
Naming
- Names are used to share resources, uniquely identify entities, and refer to locations
- We need to map from a name to the entity it refers to, e.g., browser access to www.cnn.com uses name resolution
- Naming differs between distributed and non-distributed systems: in a distributed system, the naming system is itself distributed
- How do we name mobile entities?
Learning objectives
- Understand the need for naming systems in distributed systems
- Be familiar with the design requirements for distributed name services
- Understand the operation of the Internet naming service, DNS
- Be familiar with the role of discovery services in mobile and ubiquitous computing systems
The role of names and name services
- Resources are accessed using an identifier or reference
  - An identifier can be stored in variables and retrieved from tables quickly
  - An identifier includes, or can be transformed to, an address for an object, e.g., an NFS file handle or a CORBA remote object reference
- A name is a human-readable value (usually a string) that can be resolved to an identifier or address
  - Examples: an Internet domain name, a file pathname, a process number; /etc/passwd, http://www.cdk3.net/
- For many purposes, names are preferable to identifiers:
  - the binding of the named resource to a physical location is deferred and can be changed
  - they are more meaningful to users
- Resource names are resolved by name services to give identifiers and other useful attributes
Requirements for name spaces
- Allow simple but meaningful names to be used
- Support a potentially infinite number of names
- Be structured to allow similar subnames without clashes and to group related names
- Allow restructuring of name trees; for some types of change, old programs should continue to work
- Support management of trust
Naming Concepts
- Name: what you call something
- Address: where it is located
- Route: how one gets to it
- But the distinction is no longer clear-cut; it depends on perspective. A name from one perspective may be an address from another. Perspective here means the layer of abstraction.
- Is http://www.isi.edu/~dongho a name, an address, or a route?
Things we name
- Users: to direct, and to identify
- Hosts (computers): high level and low level
- Services: service and instance
- Files and other "objects": content and repository
- Groups: of any of the above
How we name things
- Host-based naming: the host name is a required part of the object name
- Global naming: look up the name in a global database to find an address; provides name transparency
- User/object-centered naming: the namespace is centered around a user or object
- Attribute-based naming: an object is identified by its unique characteristics; related to resource discovery, search, and indexes
Namespace
- A name space maps X -> O at a particular point in time
- The rest of the definition, and even some of the above, is open to discussion/debate
- What is a "flat namespace"? Largely an implementation issue
Scalability of naming
- Scalability: the ability to continue to operate efficiently as a system grows large numerically, geographically, or administratively
- Affected by: frequency of update, granularity, evolution/reconfiguration
- DNS characteristics: multi-level implementation, replication of the root and other servers, multi-level caching
Name Spaces (1)
- A hierarchical directory structure forms a naming graph (a DAG)
- Each file name is a unique path in the DAG
- Resolution of /home/steen/mbox is a traversal of the DAG
- File names are human-friendly
Linking and Mounting (1)
The concept of a symbolic link, explained in a naming graph.
Linking and Mounting (2)
Mounting remote name spaces through a specific access protocol (e.g., the NFS mount protocol).
Linking and Mounting (3)
Organization of the DEC Global Name Service.
Resolving File Names across Machines
- Remote files are accessed using a (node name, path name) pair
- The NFS mount protocol maps a remote node onto the local DAG
- Remote files are then accessed using local names (location independence)
- The OS maintains a mount table with the mappings
Name Space Distribution
- Naming in large distributed systems; the system may be global in scope (e.g., Internet, WWW)
- The name space is organized hierarchically, with a single root node (like naming files)
- The name space is distributed and has three logical layers:
  - Global layer: highest-level nodes (root and a few children); represent groups of organizations; rare changes
  - Administrational layer: nodes managed by a single organization, typically one node per department; infrequent changes
  - Managerial layer: the actual nodes; frequent changes
- Zone: the part of the name space managed by a separate name server
Name Space Distribution (1)
An example partitioning of the DNS name space, including Internet-accessible files, into three layers.
Name Space Distribution (2)
A comparison between name servers for implementing nodes from a large-scale name space partitioned into a global layer, an administrational layer, and a managerial layer. The more stable a layer, the longer lookups remain valid (and the longer they can be cached).

Item                            | Global    | Administrational | Managerial
Geographical scale of network   | Worldwide | Organization     | Department
Total number of nodes           | Few       | Many             | Vast numbers
Responsiveness to lookups       | Seconds   | Milliseconds     | Immediate
Update propagation              | Lazy      | Immediate        | Immediate
Number of replicas              | Many      | None or few      | None
Is client-side caching applied? | Yes       | Yes              | Sometimes
Implementation of Name Resolution (1)
Iterative name resolution:
- The client starts with the root
- Each server resolves as much of the name as it can and returns the address of the next name server to the client
Implementation of Name Resolution (2)
Recursive name resolution:
- The client sends the full name to the root
- Each server resolves as much as it can and hands the rest to the next server, on the client's behalf
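The two styles can be contrasted in a small sketch. The zone tables and server names below are invented for illustration, and real DNS messages look nothing like this; the point is only who follows the referral chain (the client, or the servers themselves).

```python
# Hypothetical zone data: each server maps its child label either to the next
# server ("NS" referral) or to a final address ("A" record).
SERVERS = {
    "root":      {"nl":  ("NS", "nl-server")},
    "nl-server": {"vu":  ("NS", "vu-server")},
    "vu-server": {"cs":  ("NS", "cs-server")},
    "cs-server": {"ftp": ("A", "130.37.21.11")},
}

def resolve_iterative(labels):
    """Iterative: the client itself contacts each server in turn."""
    server = "root"
    for label in labels:
        kind, value = SERVERS[server][label]
        if kind == "A":
            return value
        server = value          # referral: client contacts the next server
    raise KeyError(labels)

def resolve_recursive(labels, server="root"):
    """Recursive: each server forwards the remainder on the client's behalf."""
    kind, value = SERVERS[server][labels[0]]
    if kind == "A":
        return value
    return resolve_recursive(labels[1:], server=value)
```

Both resolve the name <nl, vu, cs, ftp> to the same address; they differ only in where the intermediate referrals are visible, which is exactly what decides where caching can happen (next slides).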
Which is better?
- Recursive name resolution puts a heavy burden on global-layer nodes, so those nodes typically support only iterative resolution
- Advantages of recursive name resolution:
  - Caching is possible at the name servers, which gradually learn about each other; caching improves performance
  - Time-to-live (TTL) values impose limits on caching duration; results from higher layers change rarely and can be cached longer
- With iterative resolution, only caching at the client is possible
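The TTL-bounded caching described above can be sketched as follows. This is a generic expiring cache, not DNS-spec code; the class name and the injectable clock are our own choices.

```python
import time

class TTLCache:
    """Name-resolution cache: each entry expires after its own TTL, so
    stale results (e.g., from a restructured zone) are eventually refetched."""

    def __init__(self, clock=time.monotonic):
        self._clock = clock
        self._entries = {}            # name -> (address, expiry time)

    def put(self, name, address, ttl):
        self._entries[name] = (address, self._clock() + ttl)

    def get(self, name):
        entry = self._entries.get(name)
        if entry is None:
            return None
        address, expiry = entry
        if self._clock() >= expiry:   # TTL elapsed: drop the stale entry
            del self._entries[name]
            return None
        return address
```

A resolver would use long TTLs for stable global-layer results and short TTLs for managerial-layer results, matching the stability table shown earlier.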
Implementation of Name Resolution (3)
Recursive name resolution of <nl, vu, cs, ftp>. Name servers cache intermediate results for subsequent lookups. (#<...> denotes the address of the server responsible for the given name.)

Server for node | Should resolve | Looks up | Passes to child | Receives and caches          | Returns to requester
cs              | <ftp>          | #<ftp>   | --              | --                           | #<ftp>
vu              | <cs,ftp>       | #<cs>    | <ftp>           | #<ftp>                       | #<cs>, #<cs,ftp>
nl              | <vu,cs,ftp>    | #<vu>    | <cs,ftp>        | #<cs>, #<cs,ftp>             | #<vu>, #<vu,cs>, #<vu,cs,ftp>
root            | <nl,vu,cs,ftp> | #<nl>    | <vu,cs,ftp>     | #<vu>, #<vu,cs>, #<vu,cs,ftp>| #<nl>, #<nl,vu>, #<nl,vu,cs>, #<nl,vu,cs,ftp>
Implementation of Name Resolution (4)
A comparison between recursive and iterative name resolution with respect to communication costs; recursive resolution may be cheaper.
DNS Name Space
The most important types of resource records forming the contents of nodes in the DNS name space.

Type of record | Associated entity | Description
SOA            | Zone              | Holds information on the represented zone
A              | Host              | Contains an IP address of the host this node represents
MX             | Domain            | Refers to a mail server to handle mail addressed to this node
SRV            | Domain            | Refers to a server handling a specific service
NS             | Zone              | Refers to a name server that implements the represented zone
CNAME          | Node              | Symbolic link with the primary name of the represented node
PTR            | Host              | Contains the canonical name of a host
HINFO          | Host              | Holds information on the host this node represents
TXT            | Any kind          | Contains any entity-specific information considered useful
DNS Implementation (1)
An excerpt from the DNS database for the zone cs.vu.nl.
DNS Implementation (2)
Part of the description for the vu.nl domain, which contains the cs.vu.nl domain.

Name          | Record type | Record value
cs.vu.nl      | NS          | solo.cs.vu.nl
solo.cs.vu.nl | A           | 130.37.21.1
X.500 Directory Service
- An OSI standard
- A directory service is a special kind of naming service in which clients can look up entities based on attributes instead of a full name
- Real-world analogy: the Yellow Pages, e.g., looking for a plumber
The X.500 Name Space (1)
A simple example of an X.500 directory entry using X.500 naming conventions.

Attribute          | Abbr. | Value
Country            | C     | NL
Locality           | L     | Amsterdam
Organization       | O     | Vrije Universiteit
OrganizationalUnit | OU    | Math. & Comp. Sc.
CommonName         | CN    | Main server
Mail_Servers       | --    | 130.37.24.6, 192.31.231, 192.31.231.66
FTP_Server         | --    | 130.37.21.11
WWW_Server         | --    | 130.37.21.11
The X.500 Name Space (2)
Part of the directory information tree.
The X.500 Name Space (3)
Two directory entries having Host_Name as RDN (Relative Distinguished Name).

Attribute          | Value (entry 1)    | Value (entry 2)
Country            | NL                 | NL
Locality           | Amsterdam          | Amsterdam
Organization       | Vrije Universiteit | Vrije Universiteit
OrganizationalUnit | Math. & Comp. Sc.  | Math. & Comp. Sc.
CommonName         | Main server        | Main server
Host_Name          | star               | zephyr
Host_Address       | 192.31.231.42      | 192.31.231.66
Caching in the Domain Name System (1)
[Figure: an iterative query Lookup(venera.isi.edu) issued from aludra; the client keeps a cache and contacts the name servers (edu, usc, isi) itself.]

Caching in the Domain Name System (2)
[Figure: the same Lookup(venera.isi.edu) as a chained (recursive) query; each intermediate server forwards the query onward and keeps its own cache.]
Closure
- Closure binds an object to the namespace within which names embedded in the object are to be resolved
- The namespace may be static or dynamic
- Historical bindings of names: the "object" may be as small as the name itself
- GNS binds names to namespaces
- Prospero binds the enclosing object to multiple namespaces
- Tilde and QuickSilver bind users to namespaces
- The NFS mount table constructs a system-centered namespace
- Movement of objects can cause problems when closure is associated with the wrong entity
Other implementations of naming
- Broadcast: limited scalability, but fast local response
- Prefix tables: essentially a form of caching
- Capabilities: combine security and naming; a traditional name service is built over capability-based addresses
Advanced Name Systems
- DEC's Global Name Service (GNS)
  - Support for reorganization is the key idea; little coordination is needed in advance
  - Half closure: names are all tagged with namespace identifiers
  - DID (Directory Identifier): a hidden part of the name that makes it global
  - Upon reorganization, a new DID is assigned; old names remain relative to the old root
  - But the DIDs must be unique: how do we assign them?
Prospero Directory Service
- Multiple namespaces, each centered around a "root" node specific to that namespace; closure binds objects to this root node
- Used today as an embedded directory service
- Layers of naming: user-level names are "object"-centered; objects still have a globally valid address; namespaces also have global addresses
- Customization in Prospero: filters create user-level derived namespaces on the fly; union links support merging of views
Resource Discovery
- Similar to naming:
  - Browsing is related to directory services
  - Indexing and search are similar to attribute-based naming
- Attribute-based naming: profiles, multi-structured naming
- Search engines; computing resource discovery
The Web
- Object handles
  - Uniform Resource Locators (URLs): are they names or addresses?
  - Uniform Resource Names (URNs): is a directory service required?
- How URLs are misused
- XML: definitions provide a form of closure, at the conceptual level rather than the "namespace" level
LDAP and Active Directory
- Manage information about users and services
- Lighter weight than X.500 DAP, heavier than DNS
- Applications have conventions on where to look; data is often duplicated because of multiple conventions
- Performance enhancements are not as well defined; caching is harder because access patterns are less constrained
- Referral mechanisms under development
LDAP
- Lightweight Directory Access Protocol (LDAP): X.500 was too complex for many applications, so LDAP is a simplified version of X.500
- Widely used for Internet services
- An application-level protocol over TCP; lookups and updates can use strings instead of OSI encodings
- Uses master servers and replica servers for performance improvements
- Example LDAP implementations: Active Directory (Windows 2000), Novell Directory Services, iPlanet Directory Services (Netscape)
- Typical uses: user profiles, access privileges, network resources
Naming versus Locating Entities
a) Direct, single-level mapping between names and addresses.
b) Two-level mapping using identities.
Forwarding Pointers (1)
The principle of forwarding pointers using (proxy, skeleton) pairs.

Forwarding Pointers (2)
Redirecting a forwarding pointer by storing a shortcut in a proxy.

Home-Based Approaches
The principle of Mobile IP.

Hierarchical Approaches (1)
Hierarchical organization of a location service into domains, each having an associated directory node.

Hierarchical Approaches (2)
An example of storing information on an entity having two addresses in different leaf domains.

Hierarchical Approaches (3)
Looking up a location in a hierarchically organized location service.

Hierarchical Approaches (4)
a) An insert request is forwarded to the first node that knows about entity E.
b) A chain of forwarding pointers to the leaf node is created.

Pointer Caches (1)
Caching a reference to the directory node of the lowest-level domain in which an entity will reside most of the time.

Pointer Caches (2)
A cache entry that needs to be invalidated because it returns a nonlocal address while a local address is available.
Scalability Issues
The scalability issues related to uniformly placing subnodes of a partitioned root node across the network covered by a location service.

The Problem of Unreferenced Objects
An example of a graph of objects containing references to each other.

Reference Counting (1)
The problem of maintaining a proper reference count in the presence of unreliable communication.

Reference Counting (2)
a) Copying a reference to another process and incrementing the counter too late.
b) A solution.
Advanced Reference Counting (1)
a) The initial assignment of weights in weighted reference counting.
b) Weight assignment when creating a new reference.

Advanced Reference Counting (2)
c) Weight assignment when copying a reference.

Advanced Reference Counting (3)
Creating an indirection when the partial weight of a reference has reached 1.

Advanced Reference Counting (4)
Creating and copying a remote reference in generation reference counting.
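The weighted scheme in these figures can be illustrated in code. This is a minimal single-process sketch; the class names and the initial total weight of 128 are our assumptions, and a real implementation must also create an indirection when a partial weight reaches 1, as slide (3) notes.

```python
class Skeleton:
    """Server-side stub: tracks the object's total weight."""
    TOTAL = 128

    def __init__(self):
        self.total = Skeleton.TOTAL

    def delete_ref(self, partial):
        """A reference was dropped: subtract its partial weight.
        Returns True when the object has become unreferenced."""
        self.total -= partial
        return self.total == 0

class Proxy:
    """Client-side reference carrying a partial weight."""

    def __init__(self, skeleton, partial):
        self.skeleton = skeleton
        self.partial = partial

    @classmethod
    def create(cls, skeleton):
        # The first reference receives the object's full weight.
        return cls(skeleton, Skeleton.TOTAL)

    def copy(self):
        # Copying halves the partial weight locally: the key point is that
        # no message to the object's owner is needed.
        half = self.partial // 2
        self.partial -= half
        return Proxy(self.skeleton, half)

    def drop(self):
        return self.skeleton.delete_ref(self.partial)
```

The invariant is that the partial weights of all live references always sum to the total weight at the skeleton, so unreliable "increment" messages are avoided entirely.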
Tracing in Groups (1)
Initial marking of skeletons.

Tracing in Groups (2)
After local propagation in each process.

Tracing in Groups (3)
Final marking.
DHT: Overview
- Abstraction: a distributed "hash table" (DHT) data structure supporting put(id, item) and item = get(id)
- Implementation: the nodes in the system form a distributed data structure, which can be a ring, tree, hypercube, skip list, butterfly network, ...
DHT: Overview (2)
Structured overlay routing:
- Join: on startup, contact a "bootstrap" node and integrate yourself into the distributed data structure; obtain a node id
- Publish: route the publication for a file id toward the closest node id along the data structure
- Search: route a query for a file id toward the closest node id; the data structure guarantees that the query will meet the publication
- Important difference: get(key) is an exact match on the key. search("spars") will not find file("briney spars"). We can exploit exact matching to be more efficient.
DHT: Example - Chord
- Associate with each node and each file a unique id in a one-dimensional space (a ring), e.g., picked from the range [0...2^m)
- The id is usually the hash of the file or of the node's IP address
- Properties: the routing table size is O(log N), where N is the total number of nodes; a file is guaranteed to be found in O(log N) hops
- From MIT, 2001
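As a sketch of the id assignment (assuming SHA-1 as the hash function, truncated modulo 2^m for a small illustrative ring; the function name and m = 8 are our choices):

```python
import hashlib

def chord_id(key: str, m: int = 8) -> int:
    """Map a file name or a node's IP address onto the ring [0, 2**m)."""
    digest = hashlib.sha1(key.encode()).digest()
    return int.from_bytes(digest, "big") % (2 ** m)

# Nodes and files hash into the same one-dimensional id space:
node_id = chord_id("130.37.21.11")   # hash of an IP address
file_id = chord_id("briney spars")   # hash of a file name
```

Because nodes and files share one id space, "who stores what" reduces to comparing ids on the ring, as the next slide shows.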
DHT: Consistent Hashing
[Figure: circular id space with nodes N32, N90, N105 and keys K5, K20, K80.]
A key is stored at its successor: the node with the next-higher id (so K5 and K20 are stored at N32, and K80 at N90).
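The successor rule can be sketched as a small helper (our own function, assuming the full set of live node ids is known; a real Chord node knows only a few other nodes):

```python
import bisect

def successor(node_ids, key, m=7):
    """Return the node responsible for `key`: the first node id >= key,
    wrapping around the circular id space [0, 2**m)."""
    nodes = sorted(node_ids)
    i = bisect.bisect_left(nodes, key % (2 ** m))
    return nodes[i % len(nodes)]   # i == len(nodes) wraps to the smallest id
```

With nodes {32, 90, 105} this reproduces the figure: K80 maps to N90, while K5 and K20 wrap-compare to N32.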
DHT: Chord Basic Lookup
[Figure: ring with nodes N10, N32, N60, N90, N105, N120. The query "Where is key 80?" is passed around the ring via successors until it reaches N90, which answers "N90 has K80".]
DHT: Chord "Finger Table"
[Figure: node N80 with fingers spanning 1/2, 1/4, 1/8, 1/16, 1/32, 1/64, and 1/128 of the ring.]
- Entry i in the finger table of node n is the first node that succeeds or equals n + 2^i
- In other words, for an m-bit id space, the ith finger points 1/2^(m-i) of the way around the ring
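The finger-table rule above can be sketched as follows (again assuming global knowledge of the node set for illustration; m = 7 matches the 128-position ring in the figure):

```python
import bisect

def finger_table(n, node_ids, m=7):
    """Finger i of node n is successor((n + 2**i) mod 2**m): the first
    live node at clockwise distance at least 2**i from n."""
    nodes = sorted(node_ids)
    table = []
    for i in range(m):
        start = (n + 2 ** i) % (2 ** m)
        j = bisect.bisect_left(nodes, start)
        table.append(nodes[j % len(nodes)])  # wrap past the highest id
    return table
```

For N80 on a ring with nodes {10, 32, 60, 80, 90, 105, 120}, the near fingers all land on N90 while the farthest finger (distance 64) wraps to N32, which is what gives lookups their halving behavior.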
Node Join
- Compute your ID
- Use an existing node to route to that ID in the ring; this finds s = successor(id)
- Ask s for its predecessor, p
- Splice yourself into the ring, just like into a linked list:
    p->successor = me
    me->successor = s
    me->predecessor = p
    s->predecessor = me
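The splice can be sketched as a doubly linked ring. This minimal sketch skips the routing step and Chord's stabilization protocol entirely; `join_after` assumes the predecessor has already been found.

```python
class Node:
    def __init__(self, node_id):
        self.id = node_id
        self.successor = self     # a lone node points at itself
        self.predecessor = self

    def join_after(self, p):
        """Splice self between p and p.successor, exactly as in the slide."""
        s = p.successor
        p.successor = self        # p->successor = me
        self.successor = s        # me->successor = s
        self.predecessor = p      # me->predecessor = p
        s.predecessor = self      # s->predecessor = me
```

Real Chord defers the predecessor updates to a periodic stabilization step so that concurrent joins cannot corrupt the ring; the pointer assignments themselves are the same.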
DHT: Chord Join (1)
Assume an identifier space [0..8). Node n1 joins.
  Succ. table of n1:  i=0: 2 -> 1 | i=1: 3 -> 1 | i=2: 5 -> 1

DHT: Chord Join (2)
Node n2 joins.
  Succ. table of n1:  i=0: 2 -> 2 | i=1: 3 -> 1 | i=2: 5 -> 1
  Succ. table of n2:  i=0: 3 -> 1 | i=1: 4 -> 1 | i=2: 6 -> 1

DHT: Chord Join (3)
Nodes n0 and n6 join.
  Succ. table of n1:  i=0: 2 -> 2 | i=1: 3 -> 6 | i=2: 5 -> 6
  Succ. table of n2:  i=0: 3 -> 6 | i=1: 4 -> 6 | i=2: 6 -> 6
  Succ. table of n0:  i=0: 1 -> 1 | i=1: 2 -> 2 | i=2: 4 -> 0
  Succ. table of n6:  i=0: 7 -> 0 | i=1: 0 -> 0 | i=2: 2 -> 2

DHT: Chord Join (4)
Nodes: n1, n2, n0, n6. Items f7 and f2 are placed at their successor nodes (f7 at n0, f2 at n2); the successor tables are as above.
DHT: Chord Routing
Upon receiving a query for item id, a node:
- checks whether it stores the item locally;
- if not, forwards the query to the largest node in its successor table that does not exceed id.
[Figure: query(7) routed over the ring of nodes n0, n1, n2, n6.]
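Using the ring from the join example (nodes 0, 1, 2, 6; m = 3), the greedy rule can be sketched as follows. `lookup` is a hypothetical client-side driver that recomputes each node's successor table on the fly, not the Chord RPC interface.

```python
import bisect

def successor(nodes, k, m=3):
    """First node id >= k on the ring [0, 2**m), wrapping around."""
    ns = sorted(nodes)
    i = bisect.bisect_left(ns, k % (2 ** m))
    return ns[i % len(ns)]

def dist(a, b, m=3):
    """Clockwise distance from a to b on the ring."""
    return (b - a) % (2 ** m)

def lookup(nodes, start, key, m=3):
    """Route a query greedily: at each hop, jump to the largest entry in the
    current node's successor table that still precedes the key.
    Returns (owner node, path of hops)."""
    owner = successor(nodes, key, m)
    n, path = start, [start]
    while n != owner:
        fingers = [successor(nodes, n + 2 ** i, m) for i in range(m)]
        best = n
        for f in fingers:
            # f must lie strictly between n and the key (clockwise),
            # and we keep the farthest such finger.
            if 0 < dist(n, f, m) < dist(n, key, m) and dist(n, best, m) < dist(n, f, m):
                best = f
        n = best if best != n else successor(nodes, n + 1, m)
        path.append(n)
    return owner, path
```

A query for item 7 issued at n1 hops to n6 and then to n0 (the successor of 7), matching the figure's query(7) scenario.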
DHT: Chord Summary
- Routing table size? O(log N) fingers
- Routing time? Each hop is expected to halve the distance to the desired id, so we expect O(log N) hops.
DHT: Discussion
- Pros:
  - Guaranteed lookup
  - O(log N) per-node state and search scope
- Cons:
  - This line used to say "not used," but DHTs are now being used in a few applications, including BitTorrent
  - Supporting non-exact-match search is (quite!) hard
Next Lecture
- Process migration
- Chapter 3 of the book