Performance evaluation on the grid
Zsolt Németh
MTA SZTAKI Computer and Automation Research Institute
Outline
● What is the grid?
● What is grid performance?
● Problems of performance evaluation
● WP3 ongoing work and further plans
● A proposed 'passive' benchmarking
● Proposed grid metrics
● Future directions
Distributed applications
Application: cooperative processes
Physical layer: computational nodes
● Process control?
● Security?
● Naming?
● Communication?
● Input / output?
● File access?
Distributed applications
Application: cooperative processes
Physical layer: computational nodes
Virtual machine:
● Process control
● Security
● Naming
● Communication
● Input / output
● File access
Conventional distributed environments and grids
● Distributed resources are virtually unified by a software layer
● A virtual machine is introduced between the application and the physical layer
● It provides a single system image to the application
● Types:
  ● "Conventional" (PVM, some implementations of MPI)
  ● Grid (Globus, Legion)
Conventional environments
● Physical level: set of nodes (node = collection of resources); login access; static
● Virtual machine: constructed on a priori information
● Processes: have resource requests
● Mapping: processes are mapped onto nodes; resource assignment is implicit
Grid
● Physical layer: set of resources; shared; dynamic
● Virtual machine: consists of the selected resources
● Processes: have resource requirements
● Mapping: resources are assigned to processes (assign nodes to resources?)
Grid: the resource abstraction
● Physical layer
● Processes: have resource needs
● Resource abstraction: explicit mapping between virtual and physical resources
● Cannot be solved at user/application level
Grid: the user abstraction
● Physical layer: local, physical users (user accounts)
● Processes: belong to a user
  ● The user of the virtual machine is authorised to use the constituting resources
  ● but has no login access to the nodes the resources belong to
● User abstraction: the user of the virtual machine is temporarily mapped onto some local accounts
● Cannot be solved at user/application level
Fundamental grid functionalities
● Formal modelling identifies the essential functionalities
● Resource abstraction
  ● Physical resources can be assigned to virtual resource needs (matched by properties)
  ● The grid provides a mapping between virtual and physical resources
● User abstraction
  ● The user of the physical machine may differ from the user of the virtual machine
  ● The grid provides a temporary mapping between virtual and physical users
Conventional distributed environments and grids
[Diagram comparing the two: user Smith requests 4 nodes / 4 CPUs with memory and storage / 1 CPU; the requests are mapped onto local accounts such as smith@n1.edu, smith@n2.edu, p12@n2.edu and griduser@n1.edu]
What is grid performance at all?
● Traditionally, 'performance' is
  ● speed
  ● throughput
  ● bandwidth, etc.
● Using grids:
  ● quantitative reasons
  ● qualitative reasons – QoS
  ● economic aspects
Grid performance analysis
1. Performance is not characteristic of an application itself but of the interaction between the application and the infrastructure.
2. The more complex and dynamic nature of a grid introduces more potential performance flaws.
3. Usual metrics and characteristic parameters are not necessarily applicable to grids.
4. The larger event-data volume needs careful reduction, feature extraction and intelligent presentation.
5. Due to the permanently changing environment, on-line and semi on-line techniques are advantageous over post-mortem methods.
6. Performance tuning is more difficult due to the dynamic environment and changing infrastructure.
7. Observation, comparison and analysis are more complex due to the diversity and heterogeneity of resources.
Interaction of application and infrastructure
● Performance = application performance × infrastructure performance
● Signature model (Pablo group)
  ● Application signature, e.g. instructions/FLOP
  ● Scaling factor (capabilities of the resources), e.g. FLOPs/second
  ● Execution signature = application signature × scaling factor
    e.g. instructions/second = instructions/FLOP × FLOPs/second
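The signature-model arithmetic can be sketched in a few lines. All numbers below are invented for illustration; a real application signature would come from measurement.

```python
# Hypothetical illustration of the Pablo-style signature model:
# execution signature = application signature * scaling factor.
# Both input values are made up for demonstration.

app_signature = 2.5        # instructions per FLOP (property of the application)
scaling_factor = 1.2e9     # FLOPs per second (capability of the resource)

# instructions/second = instructions/FLOP * FLOPs/second
execution_signature = app_signature * scaling_factor
print(f"{execution_signature:.2e} instructions/second")
```

The point of the factorisation is that the application-dependent part can be measured once, then combined with the scaling factor of whichever resources the grid assigns.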
Possible performance problems in grids
● All that may occur in a distributed application, plus:
  ● effectiveness of resource brokering
  ● synchronous availability of resources
  ● resources may change during execution
  ● various local policies
  ● shared use of resources
  ● higher costs of some activities
● The corresponding symptoms must be characterised
Grid performance metrics
● Abstract representation of measurable quantities: M = R1 × R2 × ... × Rn
● Usual metrics
  ● speedup, efficiency
  ● queue length
● Such strict values are not characteristic in a grid
  ● cannot be interpreted
  ● cannot be compared
● New metrics
  ● local metrics and grid metrics
  ● symbolic description / metrics
Processing monitoring information
● Trace data reduction
  ● volume is proportional to time t, number of processes P, and metrics dimension n
● Statistical clustering (reducing P)
  ● similar temporal behaviours are classified
  ● questionable whether it works for grids
  ● representative processes are recorded for each class
● Statistical projection pursuit (reducing n)
  ● reduces the dimension by identifying significant metrics
● Sampling frequency (reducing t)
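The clustering step above (reducing P) can be sketched as follows: processes with similar temporal behaviour are grouped, and only one representative trace per class is recorded. The traces and the distance threshold are invented for illustration; a production tool would use a proper statistical clustering method rather than this greedy single pass.

```python
# Sketch of trace-data reduction by clustering similar process behaviours.

def distance(a, b):
    """Euclidean distance between two equal-length metric traces."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def cluster_traces(traces, threshold):
    """Greedy clustering: assign each trace to the first representative
    closer than `threshold`, otherwise start a new class."""
    representatives = []   # one recorded trace per behaviour class
    membership = []        # class index for each input trace
    for t in traces:
        for i, rep in enumerate(representatives):
            if distance(t, rep) < threshold:
                membership.append(i)
                break
        else:
            membership.append(len(representatives))
            representatives.append(t)
    return representatives, membership

# Four process traces (e.g. CPU-utilisation samples); two behaviour classes.
traces = [
    [0.9, 0.8, 0.9],   # busy
    [0.1, 0.2, 0.1],   # idle
    [0.8, 0.9, 0.8],   # busy (similar to the first)
    [0.2, 0.1, 0.2],   # idle (similar to the second)
]
reps, classes = cluster_traces(traces, threshold=0.5)
print(len(reps), classes)   # 2 representative traces instead of 4
```

Recording two representatives instead of four traces is exactly the reduction of P the slide describes; whether such similarity classes remain stable in a grid is the open question noted above.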
Performance tuning, optimisation
● The execution cannot be reproduced
  ● post-mortem optimisation is not viable
  ● on-line steering is necessary, though hard to realise
● Sensors and actuators
  ● application- and implementation-dependent
  ● e.g. Autopilot, Falcon
● The average behaviour of applications can be improved
● Post-mortem tuning of the infrastructure (where possible)
  ● brokering decisions
  ● supporting services
Running benchmarks
● Benchmarks are executed on a virtual machine
● The virtual machine may change (be composed of different resources) from run to run
● A benchmark result is therefore representative of one certain virtual machine
● What can it show about the entire grid?
Benchmarking inside out
● Conventional benchmarking has a top-down view
  ● assumes an unchanging infrastructure
  ● cannot look behind the virtual level
  ● not necessarily applicable to grids
● To look behind the virtual level, a bottom-up view is necessary
Benchmarking inside out
1. There is a well-defined set of benchmarks (e.g. NPB, Parkbench, etc.)
2. System administrators (resource owners) run them from time to time
3. Results are stored in a local database together with the actual system parameters (CPU load, active users, etc.)
4. When a new virtual machine is formed, a benchmark result can be estimated from the current system parameters
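Steps 3 and 4 can be sketched with an assumed data model: each record pairs the system parameters observed at benchmark time with the achieved score, and the estimate for a newly formed virtual machine is taken from the closest recorded state. The records, parameters, and the 1-nearest-neighbour rule are all illustrative assumptions; the actual estimation method is the subject of the ongoing statistical work described below.

```python
# Sketch of the proposed "passive" benchmarking (hypothetical data model).
# Local database: (cpu_load, active_users) -> benchmark score (e.g. Mflop/s).
history = [
    ((0.1, 2), 950.0),
    ((0.5, 8), 610.0),
    ((0.9, 20), 280.0),
]

def estimate_score(cpu_load, active_users):
    """Estimate the benchmark result for the current system parameters
    from the nearest previously recorded state (1-nearest neighbour)."""
    def dist(params):
        load, users = params
        # Roughly normalise users so both parameters carry comparable weight.
        return abs(load - cpu_load) + abs(users - active_users) / 20.0
    _, score = min(history, key=lambda rec: dist(rec[0]))
    return score

# When a new virtual machine is formed, no benchmark has to be run:
print(estimate_score(0.45, 7))
```

This is what makes the scheme "passive": the estimate is obtained prior to execution and consumes no grid resources, at the cost of being only as accurate as the recorded history.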
Benchmarking inside out
● A more or less precise performance figure can be obtained prior to executing an application
● Does not consume resources
● Performance is related to the virtual machine actually formed for executing the application
Ongoing work
● Exploring the statistical properties of benchmarks and system parameters
● Exploring how benchmark results can be estimated from past measurements
● Finding the right set of benchmarks
Proposed grid metrics
● A well-defined set of benchmarks can serve as metrics
  ● multi-dimensional
  ● comparable
  ● interpretable
● Local resource metrics are transformed into global grid metrics
Proposed grid metrics
● Applications show statistical similarities to benchmarks
● Based on these similarities, an application's signature can be created
● The application signature and the resource signature together can yield performance metrics
● Symbolic processing is advantageous
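One way to read this slide is that an application's signature is its vector of similarities to a fixed benchmark set. A minimal sketch, assuming cosine similarity and invented three-dimensional metric vectors (the benchmark names and numbers are purely illustrative):

```python
# Hedged sketch: application signature as similarity to a benchmark set.
import math

# Hypothetical benchmark metric vectors, e.g. (FLOP rate, memory traffic,
# message rate), each normalised to [0, 1].
benchmarks = {
    "compute_bound": [0.9, 0.1, 0.1],
    "memory_bound":  [0.2, 0.9, 0.1],
    "comm_bound":    [0.1, 0.1, 0.9],
}

def cosine(a, b):
    """Cosine similarity of two metric vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def signature(app_vector):
    """The application's signature: its similarity to each benchmark."""
    return {name: round(cosine(app_vector, v), 3)
            for name, v in benchmarks.items()}

app = [0.8, 0.2, 0.1]          # a mostly compute-bound application
sig = signature(app)
print(sig)
print(max(sig, key=sig.get))   # the benchmark the application resembles most
```

Because the same benchmark set is run on every resource (see the passive benchmarking proposal), such signatures are comparable across the grid in a way that raw local numbers are not, which is presumably why symbolic processing over them is advantageous.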