1 Data and Metadata Architectures in a Robust Semantic Grid
Chinese Academy of Sciences, July 28, 2006
Geoffrey Fox
Computer Science, Informatics, Physics
Pervasive Technology Laboratories, Indiana University, Bloomington IN 47401
http://grids.ucs.indiana.edu/ptliupages/presentations/
gcf@indiana.edu
http://www.infomall.org
2 Status of Grids and Standards I
- It is interesting to examine Grid architectures both to see how to build great new systems and to look at linking Grids together and making them (or parts of them) interoperable.
- There is agreement that one should use Web Services with WSDL and SOAP, and not so much agreement after that.
  - But use non-SOAP transport such as GridFTP where appropriate.
- Service areas can be divided into:
  - General Infrastructure
  - Compute Grids
  - Data and Information Grids
  - Other ...
3 Status of Grids and Standards II
- General Infrastructure covers the area where industry, OASIS and W3C are building the pervasive Web Service environment.
- There are important areas of debate and vigorous technical evolution, but these are within confined areas; it is relatively clear how to adapt between different choices.
- Examples of areas of some controversy:
  - Security is critical, but commercial, academic-institution and Grid-project solutions are still evolving.
  - Workflow has many choices, and BPEL is not clearly a consensus standard; there are differences between control flow and data flow.
  - The architecture of service discovery is understood, but there is skepticism that UDDI is appropriate; it keeps getting improved.
  - In Management, Notification and Reliable Messaging there are multiple standards, but it is rather trivial to map between them.
  - WSRF symbolizes disagreements about state (which is roughly the metadata area): the question is essentially whether metadata lives in the message, in a context service, or hidden in the application.
  - The data transport model is unclear: GridFTP vs. BitTorrent vs. Fast XML.
4 The Ten Areas Covered by the 60 Core WS-* Specifications

WS-* Specification Area | Examples
1: Core Service Model | XML, WSDL, SOAP
2: Service Internet | WS-Addressing, WS-MessageDelivery; Reliable Messaging WSRM; Efficient Messaging MOTM
3: Notification | WS-Notification, WS-Eventing (publish-subscribe)
4: Workflow and Transactions | BPEL, WS-Choreography, WS-Coordination
5: Security | WS-Security, WS-Trust, WS-Federation, SAML, WS-SecureConversation
6: Service Discovery | UDDI, WS-Discovery
7: System Metadata and State | WSRF, WS-MetadataExchange, WS-Context
8: Management | WSDM, WS-Management, WS-Transfer
9: Policy and Agreements | WS-Policy, WS-Agreement
10: Portals and User Interfaces | WSRP (Remote Portlets)
5 Activities in Open Grid Forum Working Groups

GGF Area | GS-* and OGSA Standards Activities
1: Architecture | High-level resource/service naming (level 2 of slide 6), integrated Grid architecture
2: Applications | Software interfaces to the Grid, Grid remote procedure call, checkpointing and recovery, interoperability of job submittal services, information retrieval
3: Compute | Job submission, Basic Execution Services, service level agreements for resource use and reservation, distributed scheduling
4: Data | Database and file Grid access, GridFTP, storage management, data replication, binary data specification and interface, high-level publish/subscribe, transaction management
5: Infrastructure | Network measurements, role of IPv6 and high-performance networking, data transport
6: Management | Resource/service configuration, deployment and lifetime, usage records and access, Grid economy model
7: Security | Authorization, P2P and firewall issues, trusted computing
6 The NCES/WS-*/GS-* Features/Service Areas I

Service or Feature | WS-* | GS-* | NCES (DoD) | Comments

A: Broad Principles
FS1: Use SOA (Service Oriented Architecture) | WS1 | | | Core service architecture; build Grids on Web Services, industry best practice
FS2: Grid of Grids | | | | Strategy for legacy subsystems: modular architecture

B: Core Services (mainly service infrastructure; W3C/OASIS focus)
FS3: Service Internet, Messaging | WS2 | | NCES3 | Core infrastructure including reliability, publish-subscribe messaging (cf. FS13C)
FS4: Notification | WS3 | | NCES3 | JMS, MQSeries, WS-Eventing, WS-Notification
FS5: Workflow | WS4 | | NCES5 | Grid programming
FS6: Security | WS5 | GS7 | NCES2 | Grid-Shib, PERMIS, Liberty Alliance ...
FS7: Discovery | WS6 | | NCES4 | UDDI and extensions
FS8: System Metadata & State | WS7 | | | Globus MDS, Semantic Grid, WS-Context
FS9: Management | WS8 | GS6 | NCES1 | CIM
FS10: Policy | WS9 | | | ECS
FS11: Portals and Users | WS10 | | NCES7 | Portlets JSR 168, NCES Capability Interfaces
7 Grids of Grids of Simple Services
- Grids are managed collections of one or more services; a simple service is the smallest Grid.
- Services and Grids are linked by messages; internally to a service, functionalities are linked by methods.
- So we link services via methods, messages and streams.
- We are familiar with the method-linked hierarchy: lines of code, methods, objects, programs, packages.
- Similarly we overlay and compose Grids of Grids:
  - Methods, services, component Grids
  - CPUs, clusters, MPPs: Compute Resource Grids
  - Databases, federated databases: Data Resource Grids
  - Sensors, sensor nets: Data Resource Grids
8 Mediation and Transformation in a Grid of Grids and Simple Services
(Diagram: several Grids or services, each with internal interfaces and external-facing ports, are joined by Mediation and Transformation Services acting as distributed brokers between distributed ports. These services listen, queue, transform and send messages, adding roughly 1-10 ms overhead. Use "OGSA" to federate?)
9 The NCES/WS-*/GS-* Features/Service Areas II

Service or Feature | WS-* | GS-* | NCES | Comments

B: Core Services (mainly higher level; OGF focus)
FS12: Computing | | GS3 | | Job management, a major Grid focus
FS13A: Data as Repositories: Files and Databases | | GS4 | NCES8 | Distributed files, OGSA-DAI (managed data is FS14B)
FS13B: Data as Sensors and Instruments | | | | OGC SensorML
FS13C: Data Transport | WS2,3 | GS4 | NCES3,8 | GridFTP or WS interface to non-SOAP transport
FS14A: Information as Monitoring | | GS4 | | Major Grid effort for job status etc.
FS14B: Information, Knowledge, Wisdom: part of D(ata)IKW | | GS4 | NCES8 | VOSpace for IVOA, JBI for DoD, WFS for OGC; federation at this layer is a major research area; NCOW Data Strategy
FS15: Applications and User Services | | GS2 | NCES9 | Standalone services, proxies for jobs
FS16: Resources and Infrastructure | | GS5 | | Ad-hoc networks; network monitoring
FS17: Collaboration and Virtual Organizations | | GS7 | NCES6 | XGSP, shared Web Service ports
FS18: Scheduling and Matching of Services and Resources | | GS3 | | Current work only addresses scheduling "batch jobs"; need networks and services
10 Interoperability etc. for FS11-14
- The higher-level services are harder: the systems are more complicated and there is less agreement on where standards should be defined.
- OGF has JSDL and BES (Basic Execution Services), but it might be better to set standards at a different level; i.e. users might prefer to view Condor or GT4 as collections of services forming the interface.
- The idea is that maybe we should consider high-level capabilities as Grids (an EGEE or "Condor" compute Grid, for example, whose internals are black boxes for users); then you need two types of interfaces:
  - Internal interfaces, like JSDL, defining how the Condor Grid interacts internally with a computer.
  - External interfaces defining how one sets up a complex problem (maybe with lots of individual jobs, as in SETI@Home) for a Compute Grid.
11 gLite Grid Middleware Services (Enabling Grids for E-sciencE, INFSO-RI-508833)
(Diagram of gLite service groups behind API and CLI access:
- Security Services: Authentication, Authorization, Auditing
- Information & Monitoring Services: Information & Monitoring, Application Monitoring
- Workload Management Services: Computing Element, Workload Management, Accounting, Job Provenance, Package Manager
- Data Management Services: Storage Element, Data Movement, Metadata Catalog, File & Replica Catalog
- Connectivity)
12 DIRAC Architecture
(Diagram: DIRAC services - Job Management Service, JobMonitorSvc, JobAccountingSvc with AccountingDB, ConfigurationSvc, FileCatalogSvc, BookkeepingSvc - accessed by a production manager, the GANGA UI, a user CLI, a job monitor, a BK query webpage and a FileCatalog browser. DIRAC resources comprise DIRAC sites with DIRAC CEs and Agents, DIRAC storage (disk files, gridftp), and LCG access via a Resource Broker to CE 1, CE 2 and CE 3.)
13
7 Sept 2005David Evans13 Old AliEn Framework 100% perl5 SOAP Local Site elements Central services User
14 Grids of Grids Architecture
(Diagram: a pipeline from raw data through data, information and knowledge to wisdom and decisions. Sensor services (SS) and other services (S, OS) feed filter services (FS) exchanging SOAP messages; metadata (MD) catalogs, databases, portals and other Grids are composed into the overall Grid of Grids.)
15 Data-Information-Knowledge-Wisdom Pipeline
- The stages DIKW_i represent different forms of DIKW, with different terminology in different fields.
- Each DIKW_i has:
  - a resource view describing its physical instantiation (different distributed media with file, database, memory, stream etc.), and
  - an access view describing its query model (dir or ls, SQL, XPath, custom etc.).
- The different forms DIKW_i are linked by filtering steps F. A filter could be a simple format translation; a complex calculation, as in running an LHC event-processing code; a proprietary analysis, as in a search engine's processing of harvested web pages; or the addition of a metadata catalog to a collection of files.
(Diagram: DIKW1 -> F -> DIKW2 -> F -> DIKW3 -> F, each stage with its own resource view and access view.)
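To make the pipeline picture concrete, here is a minimal sketch (not from the talk; names such as DIKWStage and run_filter are our own) pairing a resource view with a trivial access view and linking stages by filters:

```python
# Minimal sketch of a DIKW pipeline: stages pair a resource view
# (physical storage) with an access view (query model); filters F
# transform one stage into the next. All names are hypothetical.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class DIKWStage:
    name: str
    resource_view: str                 # e.g. "files", "database", "stream"
    records: List[dict]                # the stage's contents

    def query(self, predicate: Callable[[dict], bool]) -> List[dict]:
        """Access view: a trivial stand-in for dir/SQL/XPath queries."""
        return [r for r in self.records if predicate(r)]


def run_filter(stage: DIKWStage, name: str, resource_view: str,
               f: Callable[[dict], dict]) -> DIKWStage:
    """A filtering step F: format translation, analysis, cataloguing..."""
    return DIKWStage(name, resource_view, [f(r) for r in stage.records])


# Data -> Information: e.g. attach derived values to raw records.
data = DIKWStage("data", "files", [{"raw": 3}, {"raw": 7}])
info = run_filter(data, "information", "database",
                  lambda r: {**r, "squared": r["raw"] ** 2})
print(info.query(lambda r: r["squared"] > 10))   # [{'raw': 7, 'squared': 49}]
```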
16 DIKW Pipeline II
- Each DIKW stage can be a complete data grid.
- The resource view is typified by standards like ODBC, JDBC and OGSA-DAI, and is internal to the DIKW Grid.
  - A name-value resource view is exemplified by JavaSpaces (tuple model) and WS-Context.
- The access (user) view is the external view of a data grid and does not have such a clear model, but rather:
  - systems like SRB (Storage Resource Broker) that virtualize file collections;
  - WebDAV, which supports the distributed file access view;
  - VOSpace from the astronomy community, viewed by some as an abstraction of SRB;
  - WFS, the Web Feature Service from the Open Geospatial Consortium, an important example.
17 WMS uses WFS, which uses data sources
(Example: a WFS feature for the Northridge2 earthquake fault, author Wald D. J., with coordinates -118.72,34.243 -118.591,34.176, served to a Web Map Service.)
18 Managed Data
- Most Grids have a managed data component (which we call a "Managed Data Grid").
- Managed data can consist of the data plus one or more metadata catalogs.
  - Metadata catalogs can contain semantic information enabling more precise access to the "data".
  - Replica catalogs (managing multiple file copies) are another kind of metadata catalog.
- SRB and digital libraries have this architecture, with mechanisms to keep multiple metadata copies coherent.
- RDF has clear relevance.
- However, there is no clear consensus on how to build a Managed Data Grid; a minimal sketch of the idea follows.
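Since the talk notes there is no consensus design, the following is only an illustrative sketch, with hypothetical names (ManagedDataGrid, register, find), of how data, a replica catalog and a semantic metadata catalog might sit together:

```python
# Hypothetical sketch of a Managed Data Grid: a replica catalog and a
# semantic metadata catalog sit beside the data itself.

from typing import Dict, List


class ManagedDataGrid:
    def __init__(self) -> None:
        self.replicas: Dict[str, List[str]] = {}       # logical name -> physical copies
        self.metadata: Dict[str, Dict[str, str]] = {}  # logical name -> semantic tags

    def register(self, logical: str, location: str, **tags: str) -> None:
        self.replicas.setdefault(logical, []).append(location)
        self.metadata.setdefault(logical, {}).update(tags)

    def find(self, **tags: str) -> List[str]:
        """Semantic query: logical names whose metadata matches all tags."""
        return [name for name, md in self.metadata.items()
                if all(md.get(k) == v for k, v in tags.items())]


grid = ManagedDataGrid()
grid.register("lcg:/run42/event7", "gridftp://siteA/f7", experiment="LHCb")
grid.register("lcg:/run42/event7", "gridftp://siteB/f7")   # second replica
print(grid.find(experiment="LHCb"))          # ['lcg:/run42/event7']
print(grid.replicas["lcg:/run42/event7"])    # both physical copies
```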
19 Resource and User Views
- Federation implies we integrate (virtualize) N data systems, which could be heterogeneous.
- Sometimes you can choose where to federate, but sometimes you can only federate at the user view:
  - In astronomy Grids there are several (~20) different data sources (collections) corresponding to different telescopes. These are built on traditional databases but expose an astronomy query interface (VOQL etc.), so one cannot federate at the database level.
  - Geographical Information Systems (GIS) are built on possibly spatially-enhanced databases but expose WFS or WMS OGC interfaces. To make a map of Indiana you need to combine the GIS of 92 separate counties; this cannot be done at the database level.
- More generally, when we link black-box data repositories to the Grid, we can only federate at the interfaces exposed by the black box (see the sketch below).
(Diagram: DIKW1..DIKW4, each with resource and access views; federation can occur at the user level across access views or at the resource level across resource views.)
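The Indiana example suggests what user-level federation looks like in code. This is a hedged sketch, assuming hypothetical per-county query functions rather than real WFS endpoints:

```python
# Sketch of user-level federation: when backends are black boxes, we can
# only fan a query out across their exposed access views and merge the
# results. County endpoints and the query format are hypothetical.

from concurrent.futures import ThreadPoolExecutor
from typing import Callable, Dict, List

# Stand-ins for per-county WFS-style access views (92 in Indiana).
county_sources: Dict[str, Callable[[str], List[dict]]] = {
    "Monroe": lambda q: [{"county": "Monroe", "feature": q}],
    "Marion": lambda q: [{"county": "Marion", "feature": q}],
    # ... 90 more counties
}


def federated_query(query: str) -> List[dict]:
    """Run the same query against every exposed interface and merge."""
    with ThreadPoolExecutor() as pool:
        partials = pool.map(lambda src: src(query), county_sources.values())
    return [feature for part in partials for feature in part]


print(federated_query("roads"))
# [{'county': 'Monroe', 'feature': 'roads'}, {'county': 'Marion', 'feature': 'roads'}]
```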
20 Metadata Systems I: Applications
- Semantic description of data (for each application)
- Replica catalog
- UDDI or other service registry
- VOMS or equivalent (PERMIS) authorization catalog
- Compute Grid static resource metadata
- Compute Grid dynamic events
- And, implicitly, metadata defining workflow, state etc., which can be stored in messages and/or catalogs (databases)
- Why not unify the resource view of these?
21 Metadata Systems II: Implementations
- There are many WS-* specifications addressing metadata, defined broadly: WS-MetadataExchange, WS-RF, UDDI, WS-ManagementCatalog, WS-Context, ASAP, WBEM, WS-GAF
- And many different implementations, from (extended) UDDI through MCAT of the Storage Resource Broker
- And of course representations, including RDF and OWL
- Further, there is system metadata (such as UDDI for core services) and metadata catalogs for each application domain
- They have different scope and different QoS trade-offs, e.g. Distributed Hash Tables (Chord) to achieve scalability in large-scale networks
22 Different Trade-offs
- It has never been clear to me how a poor lonely service is meant to know where to look up metadata, and whether metadata is meant to be thought of as a database (UDDI, WS-Context) or as the contents of a message (WS-RF, WS-MetadataExchange).
- We identified two very distinct QoS trade-offs:
  1) Large-scale, relatively static metadata, as in a (UDDI) catalog of all the world's services.
  2) Small-scale, highly dynamic metadata, as in dynamic workflows for sensor integration and collaboration:
     - fault tolerance and the ability to support dynamic changes with few-millisecond delay;
     - but only a modest number of involved services (up to 1000s in a session);
     - need session, NOT service/resource, metadata, so don't use WS-RF.
23 XML Databases of Importance
- We choose a message-based interface to a backend database.
- We built two pieces of technology with different trade-offs; each could store any metadata, but with different QoS:
  - WS-Context, designed for controlling a dynamic workflow
  - (Extended) UDDI, exemplified by semantic service discovery
- WFS provides a general, application-specific XML data/metadata repository, built on top of a hybrid system supported by UDDI and WS-Context.
- These have different performance, scalability and data-unit-size requirements.
- In our implementation, each is currently "just an Oracle/MySQL" database (with a JavaSpaces cache in WS-Context), front-ended by filters that convert between XML (GML for WFS) and an object-relational schema.
  - An example of semantics (XML) versus representation (SQL); a sketch of such a filter follows.
- OGSA-DAI offers a Grid interface to databases; we could use this internally but don't, as we only need to expose external interfaces (WFS, not MySQL) to the Grid.
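The filter idea (semantics in XML, representation in SQL) can be sketched as follows; the element names and single-table schema are invented for illustration and are not the production WFS schema:

```python
# Sketch of the front-end filter idea: XML (here a GML-like fragment) is
# shredded into object-relational rows, and rows are rebuilt into XML on
# the way out. Element names and schema are hypothetical.

import sqlite3
import xml.etree.ElementTree as ET

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE feature (name TEXT, coords TEXT)")

def xml_to_rows(doc: str) -> None:
    """Inbound filter: shred XML features into relational rows."""
    for feat in ET.fromstring(doc).iter("feature"):
        conn.execute("INSERT INTO feature VALUES (?, ?)",
                     (feat.get("name"), feat.findtext("coordinates")))

def rows_to_xml(name: str) -> str:
    """Outbound filter: rebuild XML from the relational store."""
    root = ET.Element("featureCollection")
    for n, c in conn.execute(
            "SELECT name, coords FROM feature WHERE name = ?", (name,)):
        feat = ET.SubElement(root, "feature", name=n)
        ET.SubElement(feat, "coordinates").text = c
    return ET.tostring(root, encoding="unicode")

xml_to_rows('<fc><feature name="Northridge2">'
            '<coordinates>-118.72,34.243 -118.591,34.176</coordinates>'
            '</feature></fc>')
print(rows_to_xml("Northridge2"))
```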
24 Extended UDDI XML Metadata Service
- An alternative to OGC Web Registry Services; supports the WFS GIS metadata catalog (functional metadata), user-defined metadata ((name, value) pairs), up-to-date service information (leasing), and dynamically updated registry entries.
- Our approach enables advanced query capabilities: geo-spatial and temporal queries, metadata-oriented queries, and domain-independent queries such as XPath and XQuery on the metadata catalog.
- http://www.opengrids.org/extendeduddi/index.html
- WFS: Geographical Information System compatible XML Metadata Services
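A rough sketch of what such an extended registry offers, with leasing and bounding-box queries; the publish/find functions and entry fields are hypothetical, not the actual extended-UDDI API:

```python
# Sketch of the extended-UDDI idea: registry entries carry user-defined
# (name, value) metadata, a lease expiry, and a bounding box so that
# geo-spatial and metadata-oriented queries are possible. All fields
# and the query style are illustrative.

import time
from typing import List, Tuple

BBox = Tuple[float, float, float, float]          # minx, miny, maxx, maxy

registry: List[dict] = []

def publish(endpoint: str, bbox: BBox, lease_s: float, **meta: str) -> None:
    registry.append({"endpoint": endpoint, "bbox": bbox,
                     "expires": time.time() + lease_s, "meta": meta})

def find(query_bbox: BBox, **meta: str) -> List[str]:
    """Live services whose bbox intersects the query and metadata matches."""
    now = time.time()
    def alive(e): return e["expires"] > now
    def overlaps(a: BBox, b: BBox) -> bool:
        return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]
    return [e["endpoint"] for e in registry
            if alive(e) and overlaps(e["bbox"], query_bbox)
            and all(e["meta"].get(k) == v for k, v in meta.items())]

publish("http://host/wfs/faults", (-119.0, 34.0, -118.0, 35.0),
        lease_s=300, layer="faults")
print(find((-118.8, 34.2, -118.5, 34.3), layer="faults"))
```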
25 Context as Service Metadata
- We define all metadata (static, semi-static, dynamic) relevant to a service as "Context".
- Context can be associated with a single service, a session (service activity), or both.
- Context can be independent of any interaction:
  - slowly varying, quasi-static context
  - e.g. the type or endpoint of a service, less likely to change
- Context can be generated as the result of service interactions:
  - dynamic, highly updated context information associated with an activity or session
  - e.g. a session id, or the URI of the coordinator of a workflow session
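A toy illustration of this context model (an in-memory dictionary standing in for the real WS-Context Web Service; the set_context/get_context names are ours):

```python
# Sketch of "context as service metadata": entries can be bound to a
# service, to a session, or to both, and each kind has its own lifetime.
# A toy in-memory store; the real WS-Context service is a Web Service.

from typing import Dict, Optional, Tuple

Key = Tuple[Optional[str], Optional[str], str]    # (service, session, name)
store: Dict[Key, str] = {}

def set_context(name: str, value: str,
                service: Optional[str] = None,
                session: Optional[str] = None) -> None:
    store[(service, session, name)] = value

def get_context(name: str, service: Optional[str] = None,
                session: Optional[str] = None) -> Optional[str]:
    return store.get((service, session, name))

# Quasi-static, interaction-independent context: a service endpoint.
set_context("endpoint", "http://host/wfs", service="WFS-1")
# Dynamic, session-scoped context: the workflow coordinator's URI.
set_context("coordinator", "http://host/bpel/42", session="session-42")

print(get_context("endpoint", service="WFS-1"))
print(get_context("coordinator", session="session-42"))
```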
26 Hybrid XML Metadata Services: WS-Context + extendedUDDI
- We combine the functionalities of these two services, WS-Context AND extendedUDDI, in one hybrid service to manage Context (service metadata):
  - WS-Context, controlling a workflow
  - (Extended) UDDI, supporting semantic service discovery
- This approach enables uniform query capabilities on the service metadata catalog.
- http://www.opengrids.org/wscontext/index.html
27 Information Service
(Diagram: IS clients talk WSDL over HTTP(S) to two back ends. The Extended WS-Context Service (WS-Context Ver 1.0 interface, ws-context.wsdl) holds dynamic metadata in a database via JDBC and is optimized for performance. The Extended UDDI Registry Service (UDDI Version 3.0 interface, uddi_api_v3_portType.wsdl) holds interaction-independent, relatively static metadata in a database via JDBC and is optimized for scalability.)
28 Generalizing a GIS
- Geographical Information Systems (GIS) have been hugely successful in all fields that study the earth and related worlds.
- They define geography syntax (GML) and ways to store, access, query, manipulate and display geographical features.
- In SOA terms, a GIS corresponds to a domain-specific XML language and a suite of services for the different functions above.
- However, such a universal information model has not been developed in other areas, even though there are many fields in which it appears possible:
  - BIS: Biological Information System
  - MIS: Military Information System
  - IRIS: Information Retrieval Information System
  - PAIS: Physics Analysis Information System
  - SIIS: Service Infrastructure Information System
29 ASIS: Application Specific Information System I
a) Discovery capabilities, best done using WS-* standards.
b) Domain-specific metadata and data, including a search/store/access interface (cf. WFS). Let's call the generalization ASFS (Application Specific Feature Service).
- A language to express domain-specific features (cf. GML). Let's call this ASL (Application Specific Language).
- Tools to manipulate information expressed in the language and the key data of the application (cf. coordinate transformations). Let's call these ASTT (Application Specific Tools and Transformations).
- ASL must support data sources such as sensors (cf. OGC metadata and data sensor standards) and repositories.
  - Sensors need support (common across applications) for streams of data.
  - Queries need to support archived data (find all relevant data in the past) and streaming data (find all data in the future with given properties); a sketch of such an interface follows.
  - Note all AS services behave like sensors, and all sensors are wrapped as services.
- Any domain will have "raw data" (binary) as well as data filtered into ASL. Let's call the former ASBD (Application Specific Binary Data).
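The archived-plus-streaming query requirement suggests an interface like the following sketch; the class and method names (ASFS, query, subscribe, publish) are our own shorthand, since the talk defines no concrete API:

```python
# Sketch of the generalized feature service (ASFS): one service answers
# archived queries over past data and streaming subscriptions over
# future data, so every AS service behaves like a sensor. Hypothetical.

from typing import Callable, List


class ASFS:
    """Application Specific Feature Service: archive + stream."""

    def __init__(self) -> None:
        self.archive: List[dict] = []
        self.subscribers: List[Callable[[dict], None]] = []

    def query(self, **props: str) -> List[dict]:
        """Archived query: all past features with the given properties."""
        return [f for f in self.archive
                if all(f.get(k) == v for k, v in props.items())]

    def subscribe(self, callback: Callable[[dict], None]) -> None:
        """Streaming query: deliver all matching future features."""
        self.subscribers.append(callback)

    def publish(self, feature: dict) -> None:
        """Sensors and services alike push ASL features through here."""
        self.archive.append(feature)
        for cb in self.subscribers:
            cb(feature)


svc = ASFS()
svc.publish({"kind": "fault", "name": "Northridge2"})
svc.subscribe(lambda f: print("stream:", f["name"]))
svc.publish({"kind": "fault", "name": "Hector Mine"})   # -> stream: Hector Mine
print(svc.query(kind="fault"))
```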
30 ASIS: Application Specific Information System II
c) Visualization: let's call this ASVS (Application Specific Visualization Services), generalizing WMS for GIS.
- The ASVS should both visualize information and provide a way of navigating (cf. GetFeatureInfo) the database (the ASFS).
- The ASVS can itself be federated, and presents an ASFS output interface.
d) There should be an application service interface for ASIS from which all ASIS services inherit.
e) There will be other user services interfacing to ASIS.
- All user and system services will input and output data in ASL, using filters to cope with ASBD.
(Diagram: generic AS tools, AS "sensors", AS repositories, user-defined AS services and an ASVS display exchange messages in ASL, with filtering, transformation, reasoning, data-mining and analysis along the way.)
31 Application: Context-Store Usage in the Communication of Mobile Web Services
- Handheld Flexible Representation (HHFR) is open source software for fast communication in mobile Web Services. HHFR supports streaming messages, separation of message contents, and usage of a context store. http://www.opengrids.org/hhfr/index.html
- We use the WS-Context service as a context store for the redundant parts of SOAP messages:
  - redundant data is the static XML fragments encoded in every SOAP message;
  - redundant metadata is stored in place as context associated with the service conversation.
- The empirical results show that we gain 83% in message size and, on average, 41% in transit time by using the WS-Context service.
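The optimization can be sketched as follows, with an invented wire format and template; the real HHFR negotiates its own representation, so this only illustrates the separation of static and dynamic message parts:

```python
# Sketch of the HHFR context-store idea: the static, repeated parts of a
# SOAP message are stored once under a context key, and only the small
# dynamic payload travels per message. Message shapes are illustrative.

from typing import Dict

context_store: Dict[str, str] = {}     # stand-in for the WS-Context service

STATIC_ENVELOPE = ("<soap:Envelope><soap:Header>...long unchanging"
                   " headers...</soap:Header><soap:Body>{body}</soap:Body>"
                   "</soap:Envelope>")

def negotiate(session: str) -> None:
    """Once per conversation: park the redundant XML in the context store."""
    context_store[session] = STATIC_ENVELOPE

def send(session: str, body: str) -> str:
    """Per message: ship only the dynamic payload plus the context key."""
    return f"{session}|{body}"

def receive(wire: str) -> str:
    """Receiver rebuilds the full SOAP message from the stored template."""
    session, body = wire.split("|", 1)
    return context_store[session].format(body=body)

negotiate("session-42")
wire = send("session-42", "<temp>17</temp>")
print(len(wire), "bytes on the wire instead of",
      len(receive(wire)), "for the full message")
```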
32 Optimizing Grid/Web Service Messaging Performance
- The performance and efficiency of Web Services can be greatly increased in conversational and streaming message exchanges by removing the redundant parts of the SOAP message.
33 Performance With and Without Context-store
Summary of the round trip time (T_RTT). Experiments ran over HHFR; the optimized message was exchanged over HHFR after saving the redundant/unchanging parts to the Context-store.

Message Size | Without Context-store: Ave. ± error (sec), Stddev | With Context-store: Ave. ± error (sec), Stddev
Medium: 513 byte | 2.76 ± 0.034, 0.187 | 1.75 ± 0.040, 0.217
Large: 2.61 KB | 5.20 ± 0.158, 0.867 | 2.81 ± 0.098, 0.538

- Save on average 83% of message size, 41% of transit time.
34 System Parameters
- T_access: time to access a Context-store (i.e. save or retrieve a context to/from the Context-store) from a mobile client
- T_RTT: round trip time to exchange a message through an HHFR channel
- N: number of simultaneous streams supported, summed over ALL mobile clients
- T_wsctx: time to process the setContext operation
- T_axis: time consumed by Axis processing
- T_trans: transmission time through the network
- T_stream: stream length
35 Context-store: System Parameters
(Diagram: illustrates where T_access, T_wsctx, T_axis and T_trans arise between the mobile client, the network, the Web Service container and the Context-store.)
36 Summary of T_axis and T_wsctx Measurements
- T_access = T_wsctx + T_axis + T_trans
- Data-binding overhead at the Web Service container is the dominant factor in message processing.
37 Performance Model and Measurements
- C_hhfr = n * t_hhfr + O_a + O_b
- C_soap = n * t_soap
- Breakeven point: n_be * t_hhfr + O_a + O_b = n_be * t_soap, i.e. n_be = (O_a + O_b) / (t_soap - t_hhfr)
- O_a: overhead for accessing the Context-store service
- O_b: overhead for negotiation
- O_a (WS) is roughly 20 milliseconds

Quantity | Average ± error (sec) | Stddev (sec)
Context-store access (O_a) | 4.127 ± 0.042 | 0.516
Negotiation (O_b) | 5.133 ± 0.036 | 0.825
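As a rough worked example (our extrapolation: it reads the with/without Context-store round-trip times for medium messages from slide 33 as t_hhfr and t_soap):

```latex
n_{be} = \frac{O_a + O_b}{t_{soap} - t_{hhfr}}
       \approx \frac{4.127 + 5.133}{2.76 - 1.75}
       \approx \frac{9.26}{1.01} \approx 9.2
```

so, on these mobile measurements, the Context-store would pay off after roughly ten messages in a conversation.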
38 Core Features of the Management Architecture
- Remote management: allow management irrespective of the location of the resource (as long as that resource is reachable via some means).
- Traverse firewalls and NATs: firewalls complicate management by disabling access to some transports and to internal resources; we utilize the tunneling capabilities and multi-protocol support of the messaging infrastructure.
- Extensible: management capabilities evolve with time; we use a service-oriented architecture to provide extensibility and interoperability.
- Scalable: the management architecture should scale as the number of managees increases.
- Fault-tolerant: management itself must be fault-tolerant; failure of transports OR management components should not cause the management architecture to fail.
39 Management System Built in Terms of
- Bootstrap system: robust itself, by replication.
- Registry for metadata (distributed database): robust by standard database techniques, plus our system itself for the service interfaces.
- NaradaBrokering for robust tunneled messages: NB itself made robust using our system.
- Managers: easy to make robust using our system; these are essentially agents.
- Managees: what you are managing.
  - Our system makes these robust.
  - There is NO assumption that the managed system uses NB.
40 Basic Management Architecture I
- Registry
  - Stores system state; fault-tolerant through replication.
  - Could be a global registry OR separate registries for each domain (later slide).
  - The current implementation uses a simple in-memory system.
  - Will use our WS-Context service as the registry (a service/message interface to an in-memory JavaSpaces cache and MySQL).
  - Note metadata is transported by messages, but we use a distributed database to implement the registry.
- Messaging Nodes
  - NaradaBrokering nodes that form a scalable messaging substrate.
  - Their main purpose is to serve as a message delivery mechanism between Managers and Service Adapters (Managees) in the presence of varying network conditions.
(Diagram: components read/write the registry via a pre-determined TOPIC on the NB substrate.)
41 Basic Management Architecture II
- Resources to manage (Managees)
  - If a resource DOES NOT have a Web Service interface, we create a Service Adapter (a proxy that provides the Web Service interface as a wrapper over the basic management functionality of the resource).
  - The Service Adapters connect to existing messaging nodes. This mainly leverages the multi-protocol transport support in the messaging substrate, so alternate protocols may be used when network policies cause connection failures.
- Managers
  - Active entities that manage the resources.
  - May be multi-threaded to improve scalability (currently under further investigation).
(Diagram: a Manager and Managees, each behind a Service Adapter wrapping the resource, read/write the registry via a pre-determined TOPIC over NB.)
42 Architecture: Use of Messaging Nodes
- Service Adapters and Managers communicate through messaging nodes.
- A direct connection is possible; however:
  - this assumes that the service adapters are appropriately accessible from the machines where managers would run;
  - it may require special configuration in routers/firewalls.
- Typically managers, messaging nodes and registries are in the same domain, OR in a higher-level network domain with respect to the service adapters.
- Messaging nodes (NaradaBrokering brokers) provide:
  - a scalable messaging substrate;
  - robust delivery of messages;
  - secure end-to-end delivery.
43 Architecture: Bootstrapping Process
- The architecture is arranged hierarchically; resources in different domains can be managed with separate policies for each domain.
- A bootstrapping service is run in every domain where the management architecture exists.
  - It ensures that the child-domain bootstrap processes are always up and running; periodic heartbeats convey the status of the bootstrap service.
- The bootstrap service periodically spawns a health-check manager that checks the health of the system (ensures that the registry and messaging nodes are up and running and that there are enough managers for the managees); a sketch of this loop follows.
  - Currently 1 manager per managee.
(Diagram: hierarchical bootstrap nodes /ROOT, /ROOT/FSU, /ROOT/CGL, each with a registry.)
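A sketch of the health-check loop under the stated one-manager-per-managee policy; the probes and spawn function are stand-ins for the real bootstrap machinery:

```python
# Sketch of the periodic health-check idea: a spawned checker verifies
# that registry and messaging nodes respond and that there are enough
# managers for the managees (here, one manager per managee). The probe
# functions and system state are stand-ins, not the real implementation.

import time

system = {
    "registry_up": lambda: True,            # stand-in liveness probes
    "messaging_node_up": lambda: True,
    "managers": ["mgr-1"],                  # current manager processes
    "managees": ["broker-1", "broker-2"],   # resources under management
}

def spawn_manager() -> None:
    system["managers"].append(f"mgr-{len(system['managers']) + 1}")

def health_check() -> None:
    if not system["registry_up"]():
        print("restart registry")           # would re-spawn via bootstrap
    if not system["messaging_node_up"]():
        print("restart messaging node")
    # Policy from the talk: one manager per managee.
    while len(system["managers"]) < len(system["managees"]):
        spawn_manager()

def bootstrap_loop(period_s: float, rounds: int) -> None:
    """The bootstrap service periodically spawns the health-check."""
    for _ in range(rounds):
        health_check()
        time.sleep(period_s)

bootstrap_loop(period_s=0.1, rounds=3)
print(system["managers"])   # ['mgr-1', 'mgr-2'] -> one per managee
```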
44 Architecture: User Component
- An application-specific specification of the characteristics that the resources/services being managed should maintain; this impacts the Managee interface, the registry and the Manager.
- Generic and application-specific policies are written to the registry, where they are picked up by a manager process.
- Updates to the characteristics (WS-Policy in future) are determined by the user.
- Events generated by the Managees are handled by the manager; event processing is determined by policy (future work), e.g.:
  - wait for the user's decision on handling specific conditions;
  - the event can be processed locally, so execute the default policy, etc.
- Note Managers will set up services if the registry indicates that is appropriate, so writing information to the registry can be used to start up a set of services.
45 Architecture: Structure of Managers
- The Manager process starts an appropriate manager thread for the manageable resource in question.
- A heartbeat thread periodically registers the Manager in the registry.
- The SAM (Service Adapter Manager) module thread starts a service/resource-specific "Resource Manager" that handles the actual management task.
- The management system can be extended by writing a ResourceManager for each type of Managee.
(Diagram: Manager process containing a heartbeat generator thread and a SAM module hosting the Resource Manager.)
46 Prototype
- We illustrate the architecture by managing the distributed messaging middleware, NaradaBrokering.
  - This example is motivated by the presence of a large number of dynamic peers (brokers) that need configuration and deployment in specific topologies.
- We use parts of WS-Management (June 2005): WS-Transfer [Sep 2004], WS-Enumeration [Sep 2004] and WS-Eventing (we could use WS-DM).
  - WS-Enumeration is implemented, but we do not foresee any immediate use for it in managing the brokering system.
  - WS-Transfer provides verbs (GET / PUT / CREATE / DELETE) which allow us to model setting and querying broker configuration, instantiating brokers, creating links between them and, finally, deleting brokers (tearing down the broker network) so it can be re-deployed with a possibly different configuration and topology; a sketch of this mapping follows.
  - WS-Eventing will be leveraged from the WS-Eventing capability implemented in OMII.
- WS-Addressing [Aug 2004] and SOAP v1.2 are used (needed for WS-Management).
- We used XmlBeans 2.0.0 for manipulating XML in a custom container; WS-Context will replace the current registry.
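The verb-to-operation mapping can be sketched as below. The dispatch function and in-memory tables are hypothetical stand-ins; the actual prototype speaks WS-Management SOAP messages through the Broker Service Adapter:

```python
# Sketch of modelling broker management with WS-Transfer-style verbs.
# The resource names mirror the prototype's resources (BROKER, LINK,
# CONFIGURATION); the dispatch and wire format are invented stand-ins.

from typing import Dict, Optional, Tuple

brokers: Dict[str, Dict[str, str]] = {}          # broker id -> configuration
links: Dict[Tuple[str, str], bool] = {}          # (from, to) -> link exists

def transfer(verb: str, resource: str, key, body: Optional[dict] = None):
    """Dispatch a WS-Transfer-like verb against a managed resource."""
    if resource == "BROKER":
        if verb == "CREATE":
            brokers[key] = dict(body or {})      # instantiate with config
        elif verb == "DELETE":
            brokers.pop(key, None)               # tear broker down
    elif resource == "LINK":
        if verb == "CREATE":
            links[key] = True                    # wire two brokers together
        elif verb == "DELETE":
            links.pop(key, None)
    elif resource == "CONFIGURATION":
        if verb == "GET":
            return brokers[key]                  # query current configuration
        if verb == "PUT":
            brokers[key].update(body or {})      # save new configuration

# CHAIN topology as in the prototype: broker i links to broker i-1.
for i in range(3):
    transfer("CREATE", "BROKER", f"b{i}", {"port": str(5000 + i)})
    if i > 0:
        transfer("CREATE", "LINK", (f"b{i}", f"b{i-1}"))

print(transfer("GET", "CONFIGURATION", "b1"), sorted(links))
```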
47 Prototype Components
- Broker Service Adapter
  - Note NB illustrates an electronic entity that didn't start off with an administrative Service interface, so we add a wrapper over the basic NB BrokerNode object that provides a WS-Management front end.
  - It also provides a buffering service to buffer undeliverable responses; these are retrieved later by a separate request-response message exchange.
- Broker Network Manager
  - A WS-Management client component that is used to configure a broker object through the Broker Service Adapter.
  - Contains request-response as well as asynchronous messaging-style capabilities.
  - Contains a topology generator component that determines the wiring between brokers (the links that form a specific topology).
  - For the purposes of the prototype we simply create a CHAIN topology where each i-th broker is connected to the (i-1)-st broker.
48 Prototype Resources/Properties Modeled (very specific to NaradaBrokering)

Resource URI | Operations | Description
BROKER | Create, Delete | Instantiates the broker with the current configuration; deletes the broker node
LINK (note we manage brokers and streams) | Create, Delete | Creates a link between two brokers; deletes the link between two brokers
CONFIGURATION, CONFIGURATION PROPERTY | Get, Put | Retrieves the current configuration / a single property; saves the specified configuration / single property
NODE ADDRESS, GATEWAY ADDRESS | Create | Assigns a NODE / GATEWAY address to the current node if one is not already assigned
49 Response Time Handling Events (WS-Eventing)
- Test resource which does no work other than responding to events.
- This base model shows that up to 200 resources can be managed per manager process, beyond which the response time increases rapidly.
  - This number is resource-dependent; the result is illustrative.
- Equally dividing the management load between 2 manager processes, the response time still increases with load, although slowly.
50 Amount of Management Infrastructure Required
- N = number of resources to manage
- N_MP = number of manager processes
  - If a manager process can manage 200 resources simultaneously, then N_MP = N/200
- N_MN = number of messaging nodes
  - If a messaging node can support 800 simultaneous connections, then N_MN = (N + N/200 + 1) / 800
  - The 1 extra connection is for the registry
51 Amount of Management Infrastructure Required
- Management infrastructure required = N/200 + (N + N/200 + 1)/800 ≈ N/160 (approximately)
- Thus, for N > 160, management is doable by adding (N/160) * 100 / (N + N/160), i.e. about 0.625%, more processes
- Thus the management architecture is scalable, and the approach is feasible; a small sanity check follows
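A quick sanity check of this arithmetic, under the slide's assumptions (200 resources per manager, 800 connections per messaging node, one registry connection):

```python
# Quick check of the infrastructure arithmetic above, using the slide's
# assumptions: 200 resources per manager process, 800 connections per
# messaging node, one extra connection for the registry.

def infrastructure(n: int) -> tuple:
    managers = n / 200
    messaging_nodes = (n + managers + 1) / 800
    extra = managers + messaging_nodes
    overhead_pct = 100 * extra / (n + extra)
    return managers, messaging_nodes, overhead_pct

for n in (1_000, 100_000):
    m, mn, pct = infrastructure(n)
    print(f"N={n}: {m:.1f} managers, {mn:.1f} messaging nodes, "
          f"{pct:.3f}% overhead")
# N=1000: 5.0 managers, 1.3 messaging nodes, 0.622% overhead
# N=100000: 500.0 managers, 125.6 messaging nodes, 0.622% overhead
```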
52 Prototype Recovery Costs (Individual Resources: Brokers)

Operation | Un-initialized / first time (msec, average) | Initialized / later modifications (msec, average)
Set Configuration | 1110 | 46.75
Create Broker | 734 | 132.75
Create Link | 94 | 43
Delete Link | 109 | 35.25
Delete Broker | 110 | 187.50

- The time for Create Broker depends on the number and type of transports opened by the broker; e.g. an SSL transport requires negotiation of keys and takes more time than simply opening a TCP port.
53 Recovery Times I
- Use a 5 msec read time per object from the registry, and consider two different topologies.
- Ring topology: N nodes, N links (1 outgoing link per node).
  - Each resource-management thread loads 2 objects (reads from) and writes their corresponding state (2 objects) to the REGISTRY.
  - Time to load (theoretical) per broker ≈ 2 x 5 + 1110 + 734 + 94 = 1948 msec ≈ 1.9 sec.
  - Time to load (observed) = 2.4 to 2.7 sec (total).
54 Recovery Times II
- Cluster topology: N nodes, with links per broker varying from 0 to 3 (depending on the level at which the broker sits).
  - At the cluster level we maintain a chain (hence the number of links is 1, and 0 for the last node).
  - At the super-cluster level we again maintain a chain, so all nodes except the last have an additional link.
  - At the super-super-cluster level we maintain a chain of super-cluster-level nodes, so there is an additional link per node except for the last one in the chain.
- Each resource-management thread loads 1 to 4 objects (reads from) and writes their corresponding state (1 to 4 objects) to the REGISTRY.
  - 1 object for the broker node, the others for links.
- Thus, time to load (theoretical) per broker ≈ {5 to 20} + 1110 + 734 + {0 to (94 x 1 + 43 x 2)} = 1.8 to 2.0 sec, similar to the ring topology.