WS on Component Models and Systems for Grid Applications, St Malo, June 26, 2004 — Adaptation of Legacy Software to Grid Services. Bartosz Baliś, Marian Bubak, Michał Węgiel


WS on Component Models and Systems for Grid Applications, St Malo, June 26, 2004 Adaptation of Legacy Software to Grid Services Bartosz Baliś, Marian Bubak, and Michał Węgiel Institute of Computer Science / ACC CYFRONET AGH Cracow, Poland

Outline
 Introduction - motivation & objectives
 System architecture – static model (components and their relationships)
 System operation – dynamic model (scenarios and activities)
 System characteristics
 Migration framework (implementation)
 Performance evaluation
 Use case & summary

Introduction
 Legacy software
- Validated and optimized code
- Follows the traditional process-based model of computation (language- and system-dependent)
- Scientific libraries (e.g. BLAS, LINPACK)
 Service-oriented architecture (SOA)
- Enhanced interoperability
- Language-independent interface (WSDL)
- Execution within a system-neutral runtime environment (virtual machine)

Objectives
 Originally: adaptation of the OCM-G to GT 3.0
 After generalization:
- design of a versatile architecture enabling bridging between legacy software and SOA
- implementation of a framework providing tools that facilitate the process of migration to SOA
(Diagram: SM and LM processes deployed on nodes and sites, accessed by a tool through the OMIS interface exposed as a Grid Service.)

Related Work
 Lack of comprehensive solutions
 Existing approaches have numerous limitations and fail to meet grid requirements
 Kuebler D., Einbach W.: Adapting Legacy Applications as Web Services (IBM)
- Main disadvantages: insecurity & inflexibility
(Diagram: a client calls an adapter deployed in a Web Service container, which forwards requests to the service on a backend server.)

Roadmap
Introduction - motivation & objectives
 System architecture – static model (components and their relationships)
 System operation – dynamic model (scenarios and activities)
 System characteristics
 Migration framework (implementation)
 Performance evaluation
 Use case & summary

General Architecture
(Diagram: a Service Requestor calls services in the Hosting Environment — Registry, Factory, Proxy Factory, Instance, Proxy Instance — which communicate over SOAP with the Legacy System, where Master, Monitor and Slave processes wrap the legacy code.)

Service Requestor
 From the client's perspective, cooperation with legacy systems is fully transparent
 Only two services are accessible: factory and instance; the others are hidden
 The standard interaction pattern is followed:
- First, a new service instance is created
- Next, method invocations are performed
- Finally, the service instance is destroyed
 We assume a thin-client approach
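The create/invoke/destroy pattern above can be sketched from the client's side as follows. The interface and class names are illustrative stand-ins (not the actual GT 3.x API), and the in-memory factory merely simulates the remote pair:

```java
// Illustrative sketch of the factory/instance interaction pattern.
// All names here are hypothetical, not the real GT 3.x interfaces.
public class ClientPattern {
    interface ServiceInstance {
        String invoke(String request);
        void destroy();
    }
    interface ServiceFactory {
        ServiceInstance createInstance();
    }

    // Trivial in-memory stand-in for the remote factory/instance pair.
    static class DemoFactory implements ServiceFactory {
        public ServiceInstance createInstance() {
            return new ServiceInstance() {
                private boolean alive = true;
                public String invoke(String request) {
                    if (!alive) throw new IllegalStateException("destroyed");
                    return "echo:" + request;   // stands in for legacy processing
                }
                public void destroy() { alive = false; }
            };
        }
    }

    public static String runOnce(ServiceFactory factory, String request) {
        ServiceInstance instance = factory.createInstance(); // 1. create
        try {
            return instance.invoke(request);                 // 2. invoke
        } finally {
            instance.destroy();                              // 3. destroy
        }
    }

    public static void main(String[] args) {
        System.out.println(runOnce(new DemoFactory(), "hello")); // prints echo:hello
    }
}
```

The thin client never sees the registry, proxies, or backend processes; it only ever touches the factory and the instance.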

Legacy System (1/4)
 Constitutes an environment in which legacy software resides and is executed
 Responsible for actual request processing
 Hosts three types of processes: master, monitor and slave, which jointly provide a wrapper encapsulating the legacy code
 Fulfills the role of network client when communicating with the hosting environment (thus no open ports are introduced and process migration is possible)

Legacy System (2/4)
 Master: responsible for host registration and creation of the monitor and slave processes; one per host; a permanent process.
(Diagram: the Master creates and controls the Monitor and Slave processes.)

Legacy System (3/4)
 Monitor: responsible for reporting about and controlling the associated slave process; one per client; a transient process.

Legacy System (4/4)
 Slave: provides means of interface-based stateful conversation with the legacy software; one per client; a transient process.

Hosting Environment (1/5)
 Maintains a collection of grid services which encapsulate interaction with legacy systems
 Provides a layer of indirection shielding the service requestors from collaboration with backend hosts
 Responsible for mapping between clients and slave processes (one-to-one relationship)
 Mediates communication between service requestors and legacy systems

Hosting Environment (2/5)
 Registry: one per service; keeps track of backend hosts which have registered to participate in computations; a permanent service.
(Diagram: permanent services — Registry, Factory, Proxy Factory; transient services — Instance, Proxy Instance.)

Hosting Environment (3/5)
 Factory and Proxy Factory: one per service; responsible for creation of the corresponding instances; permanent services.

Hosting Environment (4/5)
 Instance: one per client; directly called by the client, provides the externally visible functionality; a transient service.

Hosting Environment (5/5)
 Proxy Instance: one per client; responsible for mediation between the backend host and the service client; a transient service.
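The proxy instance's mediation role can be illustrated with a single-JVM sketch in which two queues stand in for its request and response buffers. In the real system the exchange crosses host boundaries over SOAP; the key point preserved here is that the slave pulls work as a network client, so no inbound ports are needed on the backend host:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Single-process sketch of proxy-mediated request/response flow.
// Two queues stand in for the proxy instance's request/response buffers.
public class ProxySketch {
    private final BlockingQueue<String> requests = new ArrayBlockingQueue<>(16);
    private final BlockingQueue<String> responses = new ArrayBlockingQueue<>(16);

    // Called on behalf of the service client (via the instance service).
    public String invoke(String request) {
        try {
            requests.put(request);   // proxy stores the pending request
            return responses.take(); // client blocks until the slave responds
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IllegalStateException(e);
        }
    }

    // The slave polls the proxy: it is the network client, so the
    // backend host exposes no open incoming ports.
    public void slaveLoop() {
        try {
            String request = requests.take();
            responses.put("processed:" + request); // legacy computation stand-in
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) throws Exception {
        ProxySketch proxy = new ProxySketch();
        Thread slave = new Thread(proxy::slaveLoop);
        slave.start();
        System.out.println(proxy.invoke("data")); // prints processed:data
        slave.join();
    }
}
```

Because the client blocks on `responses.take()`, the one-to-one client-to-slave mapping described above is naturally preserved in this sketch.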

Roadmap
Introduction - motivation & objectives
System architecture – static model (components and their relationships)
 System operation – dynamic model (scenarios and activities)
 System characteristics
 Migration framework (implementation)
 Performance evaluation
 Use case & summary

Resource Management (1/2)
 Resources = processes (master/monitor/slave)
 The registry service maintains a pool of master processes which can be divided into:
- a static part – configured manually by site administrators (system boot scripts)
- a dynamic part – managed by means of a job submission facility (GRAM)
 Optimization: coarse-grained allocation and reclamation performed in advance, in the background (efficiency, smooth operation)

Resource Management (2/2)
 Coarse-grained resource = master process
 Fine-grained resource = monitor & slave process
(Diagram: the Registry interacts with grid Information Services, Data Management, Job Submission and a Resource Broker to perform coarse-grained allocation of master processes (steps c.1–c.5) and fine-grained allocation of monitor/slave processes (steps f.1–f.2).)

Invocation Patterns
 Apart from the synchronous and sequential mode of method invocation, our solution supports:
1. Asynchronism – assumed to be embedded into the legacy software; our approach: the invocation returns immediately and a separate thread is blocked on a complementary call waiting for the output data to appear
2. Concurrency – slave processes handle each client request in a separate thread
3. Transactions – the most general model of concurrent nested transactions is assumed
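The asynchronous pattern in point 1 (return immediately, collect the output with a complementary blocking call) can be sketched with a future; the method names and the squaring computation are illustrative only:

```java
import java.util.concurrent.CompletableFuture;

// Sketch of asynchronous invocation: the call returns at once and a
// separate, complementary blocking call waits for the output to appear.
public class AsyncInvoke {
    // Starts the (simulated) legacy computation and returns immediately.
    public static CompletableFuture<Integer> invokeAsync(int input) {
        return CompletableFuture.supplyAsync(() -> input * input);
    }

    public static void main(String[] args) {
        CompletableFuture<Integer> pending = invokeAsync(7); // returns immediately
        // ... the client is free to do other work here ...
        int result = pending.join();   // complementary blocking call
        System.out.println(result);    // prints 49
    }
}
```

In the actual architecture the blocking side would be a second SOAP call against the proxy instance rather than a `join` on an in-process future.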

Legacy Side Scenarios (1/2)
1. Client assignment – the master process repetitively volunteers to participate in request processing (reporting host CPU load). When the registry service assigns a client before a timeout occurs, new monitor and slave processes are created.
2. Request processing – embraces input retrieval, request processing and output delivery.
3. System self-monitoring – the monitor process periodically reports to the proxy instance about the status of the slave process and current CPU load statistics (both system- and slave-related).

Legacy Side Scenarios (2/2)
(Sequence diagram: the Master volunteers to the Registry and is either assigned a client [success] or times out and volunteers again; on assignment it creates the Monitor and Slave; the Monitor sends heartbeats to the Proxy Instance [continue/migration]; the Slave exchanges request/response pairs; after a final assignment timeout the processes are destroyed.)
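The client-assignment scenario (volunteer, wait with a timeout, retry) can be sketched as a polling loop. The blocking queue standing in for the registry, and the names used, are illustrative assumptions:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

// Sketch of the client-assignment loop: the master volunteers, waits
// with a timeout for an assignment, and on timeout volunteers again.
public class AssignmentLoop {
    public static String waitForClient(BlockingQueue<String> registry,
                                       long timeoutMs, int maxAttempts) {
        try {
            for (int i = 0; i < maxAttempts; i++) {
                // a real master would also report current CPU load here
                String client = registry.poll(timeoutMs, TimeUnit.MILLISECONDS);
                if (client != null) return client; // next: spawn monitor + slave
            }
            return null;                           // no client assigned
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return null;
        }
    }

    public static void main(String[] args) {
        BlockingQueue<String> registry = new ArrayBlockingQueue<>(1);
        registry.offer("client-42");
        System.out.println(waitForClient(registry, 10, 3)); // prints client-42
    }
}
```

The timeout branch is what lets the registry rebalance: an unassigned master simply re-enters the pool with fresh load information.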

Client Side Scenarios (1/2)
1. Instance construction – involves two steps: creation of the associated proxy instance, and assignment of one of the currently registered master processes.
2. Method invocation – the client call is forwarded to the proxy instance, from where it is fetched by the associated slave process; the requestor is blocked until the response arrives.
3. Instance destruction – the destruction request is forwarded to the associated proxy instance.

Client Side Scenarios (2/2)
(Sequence diagram: the client asks the Factory to create a new Instance; the Proxy Factory creates the corresponding Proxy Instance; the Registry assigns a master process; the client then invokes methods on the Instance and finally destroys it.)

Process Migration (1/5)
 Indispensable when we need to:
- dynamically offload work onto idle machines (automatic load-balancing)
- silently mask recovery from system failures (transparent fail-over)
 Challenges: state extraction & reconstruction
 Low-level approach
- Suitable only for a homogeneous environment (e.g. a cluster of workstations)
- Supported by our solution, since legacy systems act as clients rather than servers

Process Migration (2/5)
 High-level approach
- Can be employed in a heterogeneous environment
- State restoration is based on a combination of checkpointing and repetition of the short-term method invocation history
- Requires additional development effort (state serialization, snapshot dumping and loading)
 The proxy instance initiates high-level recovery upon detection of failure (lost heartbeat) or overload
 Only the slave and monitor processes are transferred onto another computing node

Process Migration (3/5)
 Selection of the optimal state reconstruction scenario is based on the transaction flow and checkpoint sequence (multiple state snapshots are recorded and the one enabling the fastest recovery procedure is chosen)
(Timeline: committed and aborted transactions between a checkpoint and the failure point; transactions completed before the chosen checkpoint are omitted on recovery, while later ones are repeated.)

Process Migration (4/5)
 The CPU load generated by the slave process (as reported by the monitor process) is approximated as a function of time and used to estimate the cost of invocations
(Formula image omitted in the transcript; symbols: c – total cost, f – frequency, l – CPU load, t – time.)

Process Migration (5/5)
 In case of concurrent method invocations, emulation of the synchronization mechanisms employed on the client side is necessary:
- timing data is gathered (method invocation start & end timestamps),
- if two operations overlapped in time, they are re-executed concurrently (otherwise sequentially).
 Prerequisite: repeatable invocations (unless the system state was changed, identical results are expected in response to the same input data).
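The replay-ordering rule above (re-execute two recorded invocations concurrently iff their original [start, end] intervals intersected) reduces to a simple interval-overlap test:

```java
// Replay-ordering rule from the migration scenario: two recorded
// invocations are re-executed concurrently iff their original
// [start, end] time intervals overlapped; otherwise sequentially.
public class ReplayOrder {
    public static boolean overlapped(long start1, long end1,
                                     long start2, long end2) {
        // closed intervals intersect iff each starts before the other ends
        return start1 <= end2 && start2 <= end1;
    }

    public static void main(String[] args) {
        System.out.println(overlapped(0, 10, 5, 15)); // true  -> replay concurrently
        System.out.println(overlapped(0, 5, 6, 12));  // false -> replay sequentially
    }
}
```

Replaying overlapping calls concurrently matters because the repeatability prerequisite only holds per invocation; the relative ordering observed by the legacy code must match the original run.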

Roadmap
Introduction - motivation & objectives
System architecture – static model (components and their relationships)
System operation – dynamic model (scenarios and activities)
 System characteristics
 Migration framework (implementation)
 Performance evaluation
 Use case & summary

System Features (1/3)
 Non-functional requirements:
- QoS-related (the manner in which service provisioning takes place): performance & dependability,
- TCO-related (expenses incurred by system maintenance): scalability & expandability.
 Efficiency – coarse-grained resource allocation; the pool of master processes always reflects actual needs; algorithms have linear time complexity; checkpointing and transactions jointly allow for selection of the optimal recovery scenario.

System Features (2/3)
 Availability – fault tolerance based on both low-level and high-level process migration; failure detection and self-healing; checkpointing allows for robust error recovery; in the worst case A = 50% (when the whole call history needs to be repeated, we have MTTF = MTTR).
 Security – no open incoming ports are introduced on backend hosts; authentication of legacy systems is possible; we rely upon the grid security infrastructure provided by the container.

System Features (3/3)
 Scalability – processing is highly distributed and parallelized (all tasks are always delegated to legacy systems); load balancing is guaranteed (by the registry and proxy instance); the job submission mechanism is exploited (resource brokering).
 Versatility – no assumptions are made as regards programming language or run-time platform; portability; non-intrusiveness (no legacy code alteration needed); standards compliance and interoperability.

Migration Framework (1/2)
 Code-named L2G (Legacy To Grid)
 Based on GT 3.2 (hosting environment) and gSOAP 2.6 (legacy system)
 Objective: to facilitate the adaptation of legacy C/C++ software to GT 3.2 services by automatic code generation (with particular emphasis on ease of use and universality)
 Structural and operational compliance with the proposed architecture
 Served as a proof of concept of our solution

Migration Framework (2/2)
 Most typical development cycle:
1. The programmer specifies the interface that will be exposed by the deployed service (Java)
2. Source code generation takes place (Java/C++/XML/shell scripts)
3. The programmer provides the implementation for the methods on the legacy system side (C++)
 Support for process migration, checkpointing, transactions, and MPI (a parallel machine consists of multiple slave processes, one of which is in charge of communication with the proxy instance)
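Step 1 of the cycle might look like the following: a plain Java interface from which L2G would generate the service-side code and a legacy-side skeleton. The interface shown is a made-up example, not one shipped with the framework, and the in-Java stub exists only so the sketch is self-contained:

```java
// Hypothetical example of the Java interface a developer writes in step 1;
// L2G would generate the GT 3.2 service code and a gSOAP/C++ skeleton
// to be implemented on the legacy side in step 3.
public class L2GExample {
    public interface VectorService {
        double dot(double[] a, double[] b);  // real body would be legacy C/C++
    }

    // Trivial in-Java stand-in; in the framework this method body
    // is what the programmer supplies in C++ on the legacy side.
    public static class StubService implements VectorService {
        public double dot(double[] a, double[] b) {
            double s = 0;
            for (int i = 0; i < a.length; i++) s += a[i] * b[i];
            return s;
        }
    }

    public static void main(String[] args) {
        System.out.println(new StubService().dot(
                new double[]{1, 2}, new double[]{3, 4})); // prints 11.0
    }
}
```

Keeping the contract as an ordinary Java interface is what lets step 2 emit both the WSDL-facing service and the C++ stubs from a single source of truth.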

Roadmap
Introduction - motivation & objectives
System architecture – static model (components and their relationships)
System operation – dynamic model (scenarios and activities)
System characteristics
Migration framework (implementation)
 Performance evaluation
 Use case & summary

Performance Evaluation (1/5)
 Benchmark: comparison of two functionally equivalent grid services (the same interface), one of which was dependent on a legacy system
 Both services exposed a single operation: int length(String s);
 Time measurement was performed on the client side; all components were located on a single machine; no security mechanism was employed; the relative overhead was estimated

Performance Evaluation (2/5)
 Measurement results for method invocation; model: time = length/bandwidth + latency
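Plugging in the figures measured for the legacy-backed service (latency 37.8 ms, bandwidth 370.4 kB/s, from the results slide), the linear model predicts invocation time as follows; taking kB = 1000 bytes is an assumption of this sketch:

```java
// Linear cost model from the benchmark: time = length/bandwidth + latency.
// Constants are the values measured for the legacy-backed service.
public class InvocationModel {
    static final double LATENCY_S = 0.0378;           // 37.8 ms
    static final double BANDWIDTH_B_PER_S = 370.4e3;  // 370.4 kB/s (kB = 1000 B)

    public static double invocationTime(double lengthBytes) {
        return lengthBytes / BANDWIDTH_B_PER_S + LATENCY_S;
    }

    public static void main(String[] args) {
        // e.g. a 10 kB request: 10000/370400 + 0.0378 ~= 0.0648 s
        System.out.printf("%.4f s%n", invocationTime(10_000));
    }
}
```

For small messages the latency term dominates, which is consistent with the roughly constant per-call overhead the benchmark attributes to the extra proxy hop.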

Performance Evaluation (3/5)
 Measurement results for instance construction; model: time = iterations/throughput

Performance Evaluation (4/5)
 Measurement results for instance destruction; model: time = iterations/throughput

Performance Evaluation (5/5)
 Method invocation:
Quantity      Ordinary service   Legacy service   Relative change
Bandwidth     909.1 kB/s         370.4 kB/s       Reduced 2.5x
Latency       15.4 ms            37.8 ms          Increased 2.5x
 Instance construction and destruction:
Scenario      Ordinary service   Legacy service   Relative change
Construction  6.2 iterations/s   2.0 iterations/s   Reduced 3.1x
Destruction   25.4 iterations/s  12.2 iterations/s  Reduced 2.1x

Use Case: OCM-G
 A grid application monitoring system composed of two components: the Service Manager (SM) and Local Monitors (LM), compliant with the OMIS interface
(Diagram: LM processes on nodes, an SM per site; the SM acts as the Slave, communicating via MCI and exposed through the Proxy Instance and Instance over SOAP.)

Summary
 We elaborated a universal architecture enabling integration of legacy software into the grid services environment
 We demonstrated how to implement our concept on top of existing middleware
 We developed a framework (comprising a set of command-line tools) which automates the process of migration of C/C++ codes to GT 3.2
 Further work: WSRF, message-level security, optimizations, support for real-time applications

More Info
 See also: (links not preserved in the transcript)