
1 JBoss Clustering and Configuration Service Implementation
Bologna, 19th-20th February 2004, 5th Plenary TAPAS Workshop
Giorgia Lodi (lodig@cs.unibo.it)
Department of Computer Science, University of Bologna

2 Summary
- Configuration Service
- JBoss Clustering load balancing and fail-over mechanisms
- Clustering Experiments
- Current and future work
- Concluding Remarks
- References

3 Configuration Service (1/2)
- The configuration service exercises "coarse-grained" configuration control:
  - it can manage macro resources such as host computers;
  - it cannot view or manage the activities of those resources at any finer granularity.
- The JVM does not allow a high-level programmer to manage parameters such as CPU utilization, memory usage, and disk space usage.

4 Configuration Service (2/2)
- It will not reserve or allocate a given amount of CPU, memory, or disk for a particular application.
- It will not change the machine's scheduler either.
- It is responsible for setting up the platform and distributing the load among the hosts.

5 JBoss Clustering Service (1/7)
- The clustering service is useful for meeting non-functional requirements such as availability and scalability; it provides load-balancing and fail-over services.

6 JBoss Clustering Service (2/7)
- A JBoss cluster is a set of nodes; each node is an instance of the JBoss AS. Several nodes in a cluster can be grouped to form a "partition".
- A partition is identified by a unique name in the cluster; the partition name is defined in the AS configuration files. A node may belong to one or more partitions (i.e., partitions may overlap).

7 JBoss Clustering Service (3/7)

8 JBoss Clustering Service (4/7)
- JGroups: an open-source project; a reliable group communication toolkit written in Java.
- Highly Available Partition (HAPartition): abstracts the communication layer; provides access to basic communication primitives; exposes informational data (e.g. the cluster name, the name of the node, information about the membership of the cluster). Two categories of primitives are available: state transfer and RPC calls.

9 JBoss Clustering Service (5/7)
- Distributed Replicant Manager (DRM): responsible for managing objects replicated across a given partition. For example, for the stubs of an RMI server, the DRM allows sharing the stubs in the cluster and knowing to which node each stub belongs.
- Distributed State Service (DS): manages replicated state (e.g. Stateful Session Bean state, HTTP sessions); allows sharing a set of dictionaries in the cluster.
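As a rough illustration of the DRM idea (a sketch, not the real JBoss API), a replicant registry can be modeled as a per-key map from node names to the objects each node published, e.g. the stubs of an RMI server:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical mini-registry illustrating the DRM idea: each node publishes a
// replicant (e.g. an RMI stub) under a shared key, and every member can see
// which node each replicant belongs to. Class and method names are invented.
public class ReplicantRegistry {
    // key -> (node name -> replicant)
    private final Map<String, Map<String, Object>> replicants = new LinkedHashMap<>();

    public void add(String key, String nodeName, Object replicant) {
        replicants.computeIfAbsent(key, k -> new LinkedHashMap<>()).put(nodeName, replicant);
    }

    public void remove(String key, String nodeName) {
        Map<String, Object> perNode = replicants.get(key);
        if (perNode != null) perNode.remove(nodeName);
    }

    // All replicants registered under the key, one per node.
    public Map<String, Object> lookupReplicants(String key) {
        return replicants.getOrDefault(key, Map.of());
    }

    public static void main(String[] args) {
        ReplicantRegistry drm = new ReplicantRegistry();
        drm.add("AccountManagerHome", "node1", "stub-for-node1");
        drm.add("AccountManagerHome", "node2", "stub-for-node2");
        // Prints which nodes currently hold a stub for the key.
        System.out.println(drm.lookupReplicants("AccountManagerHome").keySet());
    }
}
```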

10 JBoss Clustering Service (6/7)
- HA JNDI: a global, shared, cluster-wide JNDI context, used by clients to look up and bind objects.
- HA RMI: responsible for implementing the smart proxies of JBoss clustering.
- HA EJB: provides mechanisms to cluster EJBs (i.e. Stateless Session Beans, Stateful Session Beans, Entity Beans). No clustered version of Message Driven Beans is currently implemented in JBoss 3.x.
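A client-side HA-JNDI bootstrap for JBoss 3.x can be sketched as below. The node names are placeholders, and the actual lookup (left in a comment) requires a running cluster; only the environment setup is executable here:

```java
import java.util.Properties;
import javax.naming.Context;

// Sketch of how a client would bootstrap an HA-JNDI InitialContext against a
// JBoss 3.x cluster. Port 1100 is the conventional HA-JNDI default; the host
// names are placeholders for your own nodes.
public class HaJndiClient {
    public static Properties haJndiEnv(String nodeList) {
        Properties env = new Properties();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "org.jnp.interfaces.NamingContextFactory");
        env.put(Context.URL_PKG_PREFIXES, "org.jboss.naming:org.jnp.interfaces");
        // Comma-separated list of cluster nodes to try.
        env.put(Context.PROVIDER_URL, nodeList);
        return env;
    }

    public static void main(String[] args) {
        Properties env = haJndiEnv("jnp://node1:1100,jnp://node2:1100");
        System.out.println(env.getProperty(Context.PROVIDER_URL));
        // With a running cluster one would then do:
        // Context ctx = new javax.naming.InitialContext(env);
        // Object home = ctx.lookup("AccountManagerHome");
    }
}
```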

11 JBoss Clustering Service (7/7)
- Supports both so-called "homogeneous" and "heterogeneous" deployment in the cluster:
  - homogeneous: each node contains the same beans;
  - heterogeneous: each node contains a different set of beans.

12 Homogeneous Deployment
- Realized using the JBoss farming service: the application archive (jar) is copied into the JBoss /farm directory.
(figure: a jar copied into /farm on Node 1 is replicated to Nodes 2 and 3 of the cluster)

13 Heterogeneous Deployment
- No documentation is available for it.
- Realized by defining to which node each EJB belongs.
- Not recommended: distributing transactions is a problem. It requires propagation of the transaction context and synchronization of the transaction monitors across the nodes; it requires distributed notifications; a distributed transaction manager is currently missing; and it has a deep performance impact.
- Conclusion (in every JBoss document): USE HOMOGENEOUS DEPLOYMENT!

14 Load Balancing Policies (1/3)
- JBoss adopts the third model, in which the load balancing logic lives in the client-side proxy. Motivations:
  - no single point of failure;
  - the load balancing activity can only die when the client application dies;
  - the performance cost is minimal (the client pays the full price).

15 Load Balancing Policies (2/3)
- Defined at deployment time in the Deployment Descriptors (DDs).

16 Load Balancing Policies (3/3)
- Four load balancing strategies are already included in the JBoss clustering service: Round-robin, Random-robin, First available, and First available identical all proxies.
- Clients get references to remote EJB components using the RMI mechanism (HA RMI): a stub (i.e. proxy) to the objects is downloaded into the client. The proxy code includes the clustering logic (i.e. load balancing and fail-over); the proxy contains the list of target nodes the client can access and the load balancing policy.
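The round-robin strategy carried by the proxy can be sketched in plain Java as follows; the chooseTarget method is a simplified stand-in for the policy interface of JBoss clustering, not its real signature:

```java
import java.util.List;

// Minimal sketch of the round-robin idea used by the client-side proxy: the
// proxy holds the list of target nodes and a policy object that picks the
// next target for each invocation.
public class RoundRobinPolicy {
    private int next = -1;

    // Pick the next target from the current list of cluster nodes,
    // cycling through them in order.
    public synchronized String chooseTarget(List<String> targets) {
        next = (next + 1) % targets.size();
        return targets.get(next);
    }

    public static void main(String[] args) {
        RoundRobinPolicy policy = new RoundRobinPolicy();
        List<String> nodes = List.of("node1", "node2", "node3");
        for (int i = 0; i < 4; i++) {
            System.out.println(policy.chooseTarget(nodes)); // node1 node2 node3 node1
        }
    }
}
```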

17 Fail-over Mechanism
- If the cluster topology changes, the JBoss server piggybacks a new list of target nodes onto the response.
- Before returning the response to the client code, the proxy unpacks the list of target nodes from the response, replaces its current list with the new one, and returns the real invocation result to the client code.
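A minimal sketch of this bookkeeping, with a hypothetical Response type standing in for the invocation-response object JBoss actually uses:

```java
import java.util.List;

// Sketch of the fail-over bookkeeping described above: the server piggybacks
// the current cluster view onto each response, and the proxy swaps in the new
// target list before handing the real result back to the caller. The Response
// record is invented for illustration.
public class ClusteredProxy {
    // Carries the invocation result plus, when the topology changed, a new target list.
    record Response(Object result, List<String> newTargets) {}

    private List<String> targets;

    ClusteredProxy(List<String> initialTargets) { this.targets = initialTargets; }

    Object unpack(Response response) {
        if (response.newTargets() != null) {
            targets = response.newTargets();   // topology changed: adopt the new list
        }
        return response.result();              // the caller only ever sees the real result
    }

    List<String> targets() { return targets; }

    public static void main(String[] args) {
        ClusteredProxy proxy = new ClusteredProxy(List.of("node1", "node2"));
        Object r = proxy.unpack(new Response("balance=42", List.of("node1", "node3")));
        System.out.println(r + " via " + proxy.targets());
    }
}
```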

18 Positioning the Clustering Logic
- The clustering logic (i.e. load balancing and fail-over) is located in the last interceptor of the client-side proxy.
(figure: in the client JVM, run-time generated interfaces and an invocation handler pass each call through the Security, Transaction, and Clustered interceptors, and finally to the invokers for the target nodes)

19 What We Are Investigating
- Currently, we are investigating:
  - the use of homogeneous deployment;
  - the use of the notion of "partition" for configuration/reconfiguration purposes.

20 Clustering Experiments (1/2)
- A very simple application was implemented.
(figure: an Application Client, running in the Application Client Container, invokes the AccountManager and StatementManager session beans in the EJB Container; these use the Account and Statement entity beans, linked by an entity relationship and backed by a DB)

21 Clustering Experiments (2/2)
(figure: the Account application client invoking the Account application deployed on a cluster of JBoss AS instances)

22 Clustering Experiments: Results
- State is correctly transferred among the nodes of the cluster.
- Each update is seen by every node of the cluster.
- Cluster membership is correctly updated and seen by the cluster nodes.
- Fail-over guarantees that application instances continue to operate on the surviving nodes of the cluster.

23 JBoss Clustering Limitations
- Synchronization: there is no distributed locking mechanism for synchronizing concurrent Entity Beans; these beans can only be synchronized by locking at the database level.
- Missing cluster-wide configuration management: to administer the cluster, one must connect directly to each node's JMX console.
- Load balancing: the current implementation embodies non-adaptive strategies only (i.e. none of them considers the dynamic load conditions of the machines in the cluster).

24 Current Work (1/2)
- Experimental assessment of the extent to which JBoss can be programmed so as to distribute the computational load dynamically at run time:
  - extension of the JBoss load balancing mechanism: integration of dynamic/adaptive load balancing strategies, to be defined at deployment time (for the time being);
  - testbed: a cluster of machines running JBoss, subjected to variable load conditions (e.g. using ECperf for simulation purposes).

25 Current Work (2/2)
- Configuration-service-driven, run-time management of faulty/overloaded nodes:
  - assume the application is homogeneously deployed in a JBoss cluster (or a partition of it), i.e. each node runs a full instance of the application;
  - on node failure, the JBoss fail-over mechanism guarantees that the surviving application instances continue to operate normally;
  - in contrast, the TAPAS configuration service guarantees that a new node replaces the failed one (and the state of the failed node is restored);
  - motivation: assume the partition consists of two nodes only, …

26 Future Work (1/2)
- Currently the JBoss cluster is used completely (i.e. all its nodes) when deploying an application (there is no dynamic farming service): application components cannot be deployed on a subset of the nodes of the initial cluster.
- The TAPAS Configuration Service will select the subset of nodes (of the cluster) on which to deploy and run applications.

27 Future Work (2/2)
- Geographical clustering:
  - evaluation of VPN technology to support a geographically clustered AS;
  - experimental evaluation of a geographically clustered AS.

28 SLA Interpreter
- Two phases:
  - a pure parsing step, using either a SAX or a DOM XML parser; the final result is a Java object with as many attributes as the elements of the original XML document;
  - the Java object is then processed to obtain the low-level QoS requirements (this may require statistical analysis).
- Currently the first phase (i.e. the SLA parser) is implemented:
  - using the DOM XML parser, as applied throughout the JBoss source code;
  - using the old SLA version;
  - the SLA file is included in the application's META-INF directory together with the DDs.
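The first phase can be sketched with the JDK's DOM parser as below. The SLA element names are invented for illustration, since the real SLA schema is defined elsewhere in TAPAS; the leaf elements are flattened into a map, standing in for "a Java object with as many attributes as the elements of the document":

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.Node;

// First-phase sketch: parse an SLA-like XML document with the JDK's DOM parser
// and collect the child elements of the root into name -> text-content pairs.
public class SlaParser {
    public static Map<String, String> parse(String xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        Map<String, String> attrs = new LinkedHashMap<>();
        for (Node child = doc.getDocumentElement().getFirstChild();
             child != null; child = child.getNextSibling()) {
            if (child instanceof Element e) {
                attrs.put(e.getTagName(), e.getTextContent().trim());
            }
        }
        return attrs;
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical SLA fragment; the real schema differs.
        String xml = "<sla><responseTime>200ms</responseTime><availability>99.9</availability></sla>";
        System.out.println(parse(xml)); // {responseTime=200ms, availability=99.9}
    }
}
```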

29 Concluding Remarks
- The SLA parser must be revised for the new SLA.
- If possible, could the use of distributed transactions from Arjuna "overcome" the JBoss problems with heterogeneous deployment?

30 References
- JBoss Group, "Feature Matrix: JBoss Clustering (Rabbit Hole)", 19th March 2002.
- S. Labourey and B. Burke, "JBoss Clustering", 2nd edition, 2002.
- http://www.javagroups.com/
- G. Ferrari and G. Lodi, "Implementing the TAPAS Architecture", TAPAS Internal Draft, December 2003.
- S. Labourey, "Load Balancing and Failover in the JBoss Application Server", IEEE Task Force on Cluster Computing, 2001-2004. Available at http://www.clusteringcomputing.org/
- B. Burke and S. Labourey, "Clustering with JBoss 3.0", ONJava.com, October 2002.

