
1 A Framework to Evaluate Intelligent Environments
Chao Chen
Supervisor: Dr. Sumi Helal
Mobile & Pervasive Computing Lab, CISE Department
April 21, 2007

2 Motivation
- Mark Weiser's vision: "The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it." (Scientific American, 1991)
- An increasing number of deployments in the past 16 years:
  - In the lab: Gaia, GatorTech SmartHouse, Aware Home, etc.
  - In the real world: iHospital, ...
- The big question: Are we there yet?
- Our research community needs a ruler: quantitative metrics, a benchmark (suite), a common set of scenarios...

3 Conventional Performance Evaluation
- Performance evaluation is not a new idea.
- Evaluation parameters: system throughput, transmission rate, response time, ...
- Evaluation approaches:
  - Test bed
  - Simulation / emulation
  - Theoretical model (queueing theory, Petri nets, Markov chains, Monte Carlo simulation, ...)
- Evaluation tools:
  - Performance monitoring: MetaSim Tracer (memory), PAPI, HPCToolkit, Sigma++ (memory), DPOMP (OpenMP), mpiP, gprof, psrun, ...
  - Modeling/analysis/prediction: MetaSim Convolver (memory), DIMEMAS (network), SvPablo (scalability), Paradyn, Sigma++, ...
  - Runtime adaptation: Active Harmony, SALSA
  - Simulation: ns-2 (network), netwiser (network), ...
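As a concrete illustration of the theoretical-model approach listed above (this example is not from the talk), the sketch below estimates the mean response time of a single service both analytically, with the M/M/1 queueing formula, and with a small Monte Carlo simulation. The arrival and service rates are invented example values.

```python
import random

def mm1_response_time(arrival_rate, service_rate):
    """Analytic mean response time of an M/M/1 queue: 1 / (mu - lambda)."""
    assert arrival_rate < service_rate, "queue must be stable"
    return 1.0 / (service_rate - arrival_rate)

def simulate_mm1(arrival_rate, service_rate, n_jobs=100_000, seed=42):
    """Monte Carlo estimate of mean response time for the same queue."""
    rng = random.Random(seed)
    clock = 0.0            # arrival time of the current job
    server_free_at = 0.0   # time at which the server becomes idle
    total_response = 0.0
    for _ in range(n_jobs):
        clock += rng.expovariate(arrival_rate)          # next arrival
        start = max(clock, server_free_at)              # wait if server is busy
        finish = start + rng.expovariate(service_rate)  # service completes
        server_free_at = finish
        total_response += finish - clock
    return total_response / n_jobs

if __name__ == "__main__":
    lam, mu = 8.0, 10.0   # example: 8 requests/s arriving, 10 requests/s served
    print("analytic :", mm1_response_time(lam, mu))   # 0.5 s
    print("simulated:", simulate_mm1(lam, mu))        # ~0.5 s
```

The analytic and simulated numbers agree, which is the usual sanity check before trusting either a model or a simulator on a system that cannot be measured directly.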

4 All Déjà Vu Again?
- When it comes to pervasive computing, questions emerge:
  - Same set of parameters?
  - Are conventional tools sufficient?
  - I have tons of performance data; now what?
- It is not feasible to bluntly apply conventional evaluation methods for hardware, databases, or distributed systems to pervasive computing systems.
- Pervasive computing systems are heterogeneous, dynamic, and heavily context dependent. Evaluating PerCom systems requires new thinking.

5 Related Work
- Performance evaluations in related areas:
  - Atlas, University of Florida. Metric: scalability (memory usage / number of sensors)
  - one.world, University of Washington. Metric: throughput (tuples / time, tuples / senders)
  - PICO, University of Texas at Arlington. Metric: latency (webcast latency / duration)
- We are measuring different things, applying different metrics, and evaluating systems of different architectures.

6 Challenges
- Pervasive computing systems are diverse.
- Performance metrics: a panacea for all?
- Taxonomy: a classification of PerCom systems.

7 Taxonomy
- Systems perspective: Centralized / Distributed; Stationary / Mobile
- Users perspective:
  - Application domain: Mission-critical / Auxiliary / Remedial
  - Geographic span: Body-area / Building / Urban computing
  - User-interactivity: Proactive / Reactive
- Performance factors: scalability; heterogeneity; consistency/coherency; communication cost/performance; resource constraints; energy; size/weight; responsiveness; throughput; transmission rate; failure rate; availability; safety; privacy & trust; context sentience; quality of context; user intention prediction; ...
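One way to make the taxonomy operational is to encode its dimensions as a data structure and classify each system under study. The sketch below is a hypothetical encoding, not part of the original framework; all class, field, and example values are illustrative.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical encoding of the taxonomy dimensions from this slide.

class Architecture(Enum):
    CENTRALIZED = "centralized"
    DISTRIBUTED = "distributed"

class Mobility(Enum):
    STATIONARY = "stationary"
    MOBILE = "mobile"

class ApplicationDomain(Enum):
    MISSION_CRITICAL = "mission-critical"
    AUXILIARY = "auxiliary"
    REMEDIAL = "remedial"

class GeographicSpan(Enum):
    BODY_AREA = "body-area"
    BUILDING = "building"
    URBAN = "urban computing"

class Interactivity(Enum):
    PROACTIVE = "proactive"
    REACTIVE = "reactive"

@dataclass
class PerComSystem:
    """Classification of one pervasive computing system under the taxonomy."""
    name: str
    architecture: Architecture
    mobility: Mobility
    domain: ApplicationDomain
    span: GeographicSpan
    interactivity: Interactivity
    # Performance factors relevant to this class of system (a subset of the slide's list).
    performance_factors: tuple[str, ...] = ()

# Invented example classification; the values are not claims about any real system.
example_smart_house = PerComSystem(
    name="example smart house",
    architecture=Architecture.CENTRALIZED,
    mobility=Mobility.STATIONARY,
    domain=ApplicationDomain.REMEDIAL,
    span=GeographicSpan.BUILDING,
    interactivity=Interactivity.PROACTIVE,
    performance_factors=("availability", "failure rate", "quality of context"),
)
```

Classifying systems this way makes it explicit which performance factors a given class of system should be evaluated against.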

8 Outline
- Taxonomy
- Common Set of Scenarios
- Evaluation Metrics

9 A Common Set of Scenarios
- Re-defining research goals:
  - There are varied understandings and interpretations of pervasive computing.
  - What researchers design may not be exactly what users expect.
- Evaluating pervasive computing systems is a two-step process:
  - Are we building the right thing? (Validation)
  - Are we building things right? (Verification)
- A common set of scenarios defines:
  - the capabilities a PerCom system should have, and
  - the parameters to be examined when evaluating how well these capabilities are achieved.

10 Common Set of Scenarios
- Setting: Smart House
- Scenario: plasma display burnt out
- System capabilities: service composability, fault resilience, heterogeneity compliance
- Performance parameters: failure rate, availability, recovery time
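The failure-related parameters on this slide reduce to simple ratios over an event log. The sketch below is an illustration only (the failure/repair timestamps are invented data, not measurements from the scenario).

```python
# Illustrative computation of the failure-related parameters named on the slide.

# (failure_time, repair_time) pairs in hours since the observation started
failures = [(120.0, 121.5), (480.0, 480.2), (900.0, 903.0)]
observation_hours = 1000.0

downtime = sum(repair - fail for fail, repair in failures)
uptime = observation_hours - downtime

failure_rate = len(failures) / uptime       # failures per operating hour
mttr = downtime / len(failures)             # mean time to repair (recovery time)
mtbf = uptime / len(failures)               # mean time between failures
availability = mtbf / (mtbf + mttr)         # steady-state availability

print(f"failure rate : {failure_rate:.4f} per hour")
print(f"recovery time: {mttr:.2f} hours")
print(f"availability : {availability:.4f}")  # equivalently uptime / observation_hours
```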

11 Common Set of Scenarios
- Setting: Smart Office
- Scenario: real-time location tracking, system overload, location prediction
- System capabilities: adaptivity, proactivity, context sentience
- Performance parameters: scalability, quality of context (freshness & precision), prediction rate
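To make the quality-of-context and prediction parameters concrete, the sketch below scores location readings for freshness and precision and computes a prediction rate. The thresholds and the particular definitions used here are illustrative choices, not the definitions from the talk, and the example values are invented.

```python
import time

def freshness(reading_timestamp, now=None, max_age_s=5.0):
    """1.0 for a brand-new reading, decaying linearly to 0.0 at max_age_s."""
    now = time.time() if now is None else now
    age = max(0.0, now - reading_timestamp)
    return max(0.0, 1.0 - age / max_age_s)

def precision(reported_error_m, required_error_m=1.0):
    """1.0 if the sensor's reported error is within the required bound."""
    return min(1.0, required_error_m / max(reported_error_m, 1e-9))

def prediction_rate(predicted_locations, actual_locations):
    """Fraction of location predictions that matched what actually happened."""
    hits = sum(p == a for p, a in zip(predicted_locations, actual_locations))
    return hits / len(actual_locations)

# Example with invented values:
now = 1_000_000.0
print(freshness(now - 2.0, now=now))                    # 0.6
print(precision(reported_error_m=2.5))                  # 0.4
print(prediction_rate(["lab", "office", "lobby"],
                      ["lab", "kitchen", "lobby"]))     # ~0.67
```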

12 Parameters
- The taxonomy and the common set of scenarios enable us to identify performance parameters.
- Observations:
  - Some parameters are quantifiable, others are not.
  - Parameters do not contribute equally to overall performance.
- Performance metrics:
  - Quantifiable parameters: measurement
  - Non-quantifiable parameters: analysis & testing
  - Parameters may carry different "weights".
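One way to read the "weights" remark above is as a weighted aggregate over normalized parameter scores. The sketch below is a minimal illustration with made-up weights and scores; it is not the aggregation method proposed in the talk.

```python
# Minimal weighted-aggregation sketch; weights and scores are invented examples.
# Each parameter's score is assumed to already be normalized to [0, 1],
# with 1.0 meaning "best".

weights = {            # relative importance, summing to 1.0
    "availability": 0.4,
    "responsiveness": 0.3,
    "scalability": 0.2,
    "quality_of_context": 0.1,
}

scores = {             # normalized measurements for one hypothetical system
    "availability": 0.995,
    "responsiveness": 0.80,
    "scalability": 0.60,
    "quality_of_context": 0.70,
}

overall = sum(weights[p] * scores[p] for p in weights)
print(f"overall score: {overall:.3f}")   # 0.828
```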

13 Quantifiable Parameters
- System-related parameters:
  - Quantifiable parameters (by measurement): system performance; communication performance & cost; software footprint; power profiles; data storage and manipulation; quality of context; programming efficiency; reliability and fault-tolerance; scalability; adaptivity and self-organization
  - Characteristics: node-level characteristics; service and application; context characteristics; security and privacy; economical considerations; knowledge representation; architectural characteristics; adaptivity characteristics; standardization characteristics
- Usability-related parameters:
  - Quantifiable parameters (by measurement): effectiveness; performance; learning curve; interface to backend and peer systems; measurement of users' effort (knowledge/experience, dummy compliance, correctness of user intention prediction)
  - Characteristics (by survey of users): acceptance; need; expectation; attitude toward technology; functionalities; modality
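The usability characteristics gathered "by survey of users" still need to be turned into numbers before they can sit next to the measured parameters. The sketch below is one illustrative way to do that, averaging 1-5 Likert responses and rescaling to [0, 1]; the responses are invented example data.

```python
# Illustrative normalization of survey-based usability characteristics
# (acceptance, need, expectation, ...) from a 1-5 Likert scale to [0, 1].

responses = {
    "acceptance":  [4, 5, 3, 4, 4],
    "need":        [5, 5, 4, 5, 3],
    "expectation": [3, 4, 4, 2, 3],
}

def likert_to_unit(scores, low=1, high=5):
    """Average a list of Likert responses and rescale to the [0, 1] range."""
    mean = sum(scores) / len(scores)
    return (mean - low) / (high - low)

for characteristic, scores in responses.items():
    print(f"{characteristic:12s} {likert_to_unit(scores):.2f}")
# acceptance 0.75, need 0.85, expectation 0.55
```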

14 Conclusion & Future Work
- Contributions:
  - Developed a taxonomy of existing pervasive computing systems.
  - Proposed a common set of scenarios as an evaluation benchmark.
  - Identified evaluation metrics (a set of parameters) for pervasive computing systems.
- With the performance parameters listed, can we evaluate/measure them? How?
  - Test bed (empirical): + real measurements; - expensive, difficult to set up and maintain, hard to replay.
  - Simulation/emulation: + reduced cost, quick set-up, consistent replay, safe; - not reality, needs modeling and validation.
  - Theoretical model (analytical): abstraction of the pervasive space at a higher level.

15 Thank you!

