
1 Dependability benchmarking for transactional and web systems
Henrique Madeira, University of Coimbra, DEI-CISUC, Coimbra, Portugal
Workshop on Dependability Benchmarking, November 8, 2005, Chicago, IL, USA

2 Ingredients of a recipe to “bake” a dependability benchmark
- Measures
- Workload
- Faultload
- Procedure and rules (how to cook the thing)
Dependability benchmark specification: document-based only, or document + programs, tools, etc.
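
As a rough illustration of how these ingredients fit together, the sketch below models a dependability benchmark specification as a plain data structure. All names are hypothetical and are not part of any of the benchmarks discussed in this talk.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Fault:
    """One entry of the faultload: what to inject and how."""
    kind: str                     # e.g. "operator", "software", "hardware"
    description: str
    inject: Callable[[], None]    # action that emulates the fault


@dataclass
class BenchmarkSpec:
    """Hypothetical container for the four 'ingredients' of a dependability benchmark."""
    measures: List[str]                         # names of the reported measures
    workload: Callable[[float], Dict[str, int]]  # runs the workload for a duration, returns raw counts
    faultload: List[Fault]
    procedure: Callable[["BenchmarkSpec"], Dict[str, float]]  # rules that turn runs into reported measures
```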

3 Benchmark properties
- Representativeness
- Portability
- Repeatability
- Scalability
- Non-intrusiveness
- Easy to use
- Easy to understand

4 Benchmark properties
- Representativeness
- Portability
- Repeatability
- Scalability
- Non-intrusiveness
- Easy to use
- Easy to understand
A benchmark is always an abstraction of the real world! It is an imperfect and incomplete view of the world.
In practice, what counts is usefulness (the benchmark should help improve things) and agreement.

5 The very nature of a benchmark
- Compare components, systems, architectures, configurations, etc.
- Highly specific: applicable/valid only for a very well-defined domain.
- Contributes to improving computer systems, because alternative solutions can be compared.
- A real benchmark represents an agreement.

6 Three examples of dependability benchmarks for transactional systems
1. DBench-OLTP [DSN 2003 + VLDB 2003]
   - Dependability benchmark for OLTP systems (database-centric)
   - Provided as a document structured in clauses (like the TPC benchmarks)
2. Web-DB [SAFECOMP 2004]
   - Dependability benchmark for web servers
   - Provided as a set of ready-to-run programs and document-based rules
3. Security benchmark (first step) [DSN 2005]
   - Security benchmark for database management systems
   - Set of tests of database security mechanisms

7 The DBench-OLTP dependability benchmark
(Diagram: the System Under Benchmarking (SUB) contains the OS, the DBMS and the benchmark target; the Benchmark Management System (BMS) contains the BM, RTE and FLE components, drives the SUB with the workload, the faultload and control commands, and collects data and results.)
Workload and setup adopted from the TPC-C performance benchmark.

8 Benchmarking procedure
- Phase 1: baseline performance measures (TPC-C measures).
- Phase 2: performance measures in the presence of the faultload, plus dependability measures. Phase 2 is divided into N testing slots (Slot 1, Slot 2, ..., Slot N).
(Timeline diagram of a testing slot, from slot start to slot end: steady-state time until the steady-state condition is reached, fault activation at the injection time, detection time, recovery start and recovery end (recovery time), keep time, and data integrity testing; the measurement interval covers the slot.)
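
A minimal sketch of how a Phase 2 slot loop could be automated is shown below. All callables are hypothetical placeholders for benchmark-specific tooling; they are not part of the DBench-OLTP specification.

```python
import time


def run_phase2(faultload, restart_sut, run_workload, inject_fault,
               wait_for_detection, wait_for_recovery, check_data_integrity,
               steady_state_s=600, keep_s=300):
    """Hypothetical Phase 2 driver: one testing slot per fault in the faultload."""
    slot_results = []
    for fault in faultload:
        restart_sut()                                 # each slot starts from a clean state
        run_workload(steady_state_s)                  # reach the steady-state condition

        t_inject = time.time()
        inject_fault(fault)                           # fault activation at the injection time

        t_detect = wait_for_detection()               # timestamp when the error is detected
        t_rec_start, t_rec_end = wait_for_recovery()  # recovery start / recovery end

        run_workload(keep_s)                          # keep time after recovery
        errors = check_data_integrity()               # data integrity testing

        slot_results.append({
            "fault": fault,
            "detection_time_s": t_detect - t_inject,
            "recovery_time_s": t_rec_end - t_rec_start,
            "integrity_errors": errors,
        })
    return slot_results
```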

9 Measures
Baseline performance measures:
- tpmC: number of transactions executed per minute
- $/tpmC: price per transaction
Performance measures in the presence of the faultload:
- Tf: number of transactions executed per minute (with faults)
- $/Tf: price per transaction (with faults)
Dependability measures:
- AvtS: availability from the server point of view
- AvtC: availability from the clients' point of view
- Ne: number of data integrity errors
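
The sketch below shows one plausible way to aggregate these measures from the run logs and the slot results of the previous step. The exact aggregation rules are defined in the benchmark specification, so treat this only as an illustration with assumed input fields.

```python
def aggregate_measures(slot_results, committed_tx, run_minutes,
                       server_up_s, client_served_s, total_s, system_price):
    """Illustrative aggregation; inputs are assumed to come from the benchmark run logs."""
    tf = committed_tx / run_minutes                  # transactions per minute, with faults
    return {
        "Tf": tf,
        "$/Tf": system_price / tf if tf else float("inf"),
        "AvtS": 100.0 * server_up_s / total_s,       # availability seen by the server (%)
        "AvtC": 100.0 * client_served_s / total_s,   # availability seen by the clients (%)
        "Ne": sum(r["integrity_errors"] for r in slot_results),
    }
```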

10 Faultload
- Operator faults: emulate database administrator mistakes
- Software faults: emulate software bugs in the operating system
- High-level hardware failures: emulate hardware component failures
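
Continuing the earlier sketch, a faultload of this kind can be expressed as plain data. The concrete entries below are invented examples in the spirit of the three categories, not the actual DBench-OLTP faultload.

```python
# Invented examples only; the real DBench-OLTP faultload is defined in the specification.
FAULTLOAD = [
    {"kind": "operator", "description": "drop a table used by the workload by mistake"},
    {"kind": "operator", "description": "kill the wrong server process during maintenance"},
    {"kind": "software", "description": "emulated bug injected into an OS component"},
    {"kind": "hardware", "description": "failure of the disk holding a data file"},
]
```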

11 Examples of systems benchmarked

System | Hardware | Operating system | DBMS | DBMS config.
A | Intel Pentium III 800 MHz, 256 MB RAM, four 20 GB / 7200 rpm disks, Fast Ethernet | Win2k Prof. SP3 | Oracle 8i (8.1.7) | Conf. 1
B | (same as A) | Win2k Prof. SP3 | Oracle 9i (9.0.2) | Conf. 1
C | (same as A) | WinXP Prof. SP1 | Oracle 8i (8.1.7) | Conf. 1
D | (same as A) | WinXP Prof. SP1 | Oracle 9i (9.0.2) | Conf. 1
E | (same as A) | Win2k Prof. SP3 | Oracle 8i (8.1.7) | Conf. 2
F | (same as A) | Win2k Prof. SP3 | Oracle 9i (9.0.2) | Conf. 2
G | (same as A) | SuSE Linux 7.3 | Oracle 8i (8.1.7) | Conf. 1
H | (same as A) | SuSE Linux 7.3 | Oracle 9i (9.0.2) | Conf. 1
I | (same as A) | RedHat Linux 7.3 | PostgreSQL 7.3 | -
J | Intel Pentium IV 2 GHz, 512 MB RAM, four 20 GB / 7200 rpm disks, Fast Ethernet | Win2k Prof. SP3 | Oracle 8i (8.1.7) | Conf. 1
K | (same as J) | Win2k Prof. SP3 | Oracle 9i (9.0.2) | Conf. 1

12 DBench-OLTP benchmarking results (chart: performance)

13 DBench-OLTP benchmarking results (charts: performance and availability)

14 DBench-OLTP benchmarking results (charts: performance, availability and price)

15 Using DBench-OLTP to obtain more specific results
Availability variation during the benchmark run. The run corresponds to about 32 hours of operation, during which the system was subjected to 97 faults. Each fault is injected in a 20-minute injection slot, and the system is rebooted between slots.
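
A quick sanity check on the quoted duration (assuming, on my part, that reboot time between slots is not counted):

```python
faults = 97
slot_minutes = 20
print(faults * slot_minutes / 60)   # ≈ 32.3 hours, consistent with "about 32 hours"
```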

16 DBench-OLTP benchmarking effort

Task | # of days
TPC-C benchmark implementation | 10 (with reuse of code)
DBench-OLTP benchmark implementation | 10 (first implementation)
Benchmarking process execution | 3 (average per system)

17 The WEB-DB dependability benchmark
(Diagram: the System Under Benchmarking (SUB) contains the OS and the web server, which is the benchmark target; the Benchmark Management System (BMS) contains the SPECWeb client, the fault injector, the availability tester and the benchmark coordinator, drives the SUB with the workload, the faultload and control commands, and collects data and results.)
Workload and setup adopted from the SPECWeb99 performance benchmark.

18 WEB-DB measures
Performance degradation measures:
- SPECf: main SPEC measure in the presence of the faultload
- THRf: throughput in the presence of the faultload (ops/s)
- RTMf: response time in the presence of the faultload (ms)
Dependability-related measures:
- Availability: percentage of time the server provides the expected service
- Autonomy: percentage of times the server recovered without human intervention (estimator of the self-healing abilities of the server)
- Accuracy: percentage of correct results yielded by the server
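
The three dependability-related measures are simple ratios; the sketch below shows one plausible way to compute them from the run logs. The field names are assumptions, not the WEB-DB data format.

```python
def webdb_dependability(log):
    """Illustrative ratios; 'log' is an assumed dict of counters collected during the runs."""
    return {
        "availability_pct": 100.0 * log["time_service_ok_s"] / log["total_time_s"],
        "autonomy_pct": 100.0 * log["unassisted_recoveries"] / log["recoveries"],
        "accuracy_pct": 100.0 * log["correct_responses"] / log["responses"],
    }
```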

19 WEB-DB faultloads
Network and hardware faults:
- Connection loss (server sockets are closed)
- Network interface failures (disable + enable the interface)
Operator faults:
- Unscheduled system reboot
- Abrupt server termination
Software faults:
- Emulation of common programming errors
- Injected into the operating system (not into the web server)
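
As an illustration of what injecting two of these faults could look like on a Linux host, here is a rough sketch; the interface name, the process name and the use of the `ip` command and the psutil library are assumptions about the test environment, not part of WEB-DB.

```python
import signal
import subprocess
import time

import psutil  # third-party: pip install psutil


def abrupt_server_termination(process_name="httpd"):
    """Operator fault: kill the web-server processes without a clean shutdown."""
    for proc in psutil.process_iter(["name"]):
        if proc.info["name"] == process_name:
            proc.send_signal(signal.SIGKILL)


def network_interface_failure(interface="eth0", down_seconds=30):
    """Network fault: disable the interface, wait, then re-enable it (requires root)."""
    subprocess.run(["ip", "link", "set", interface, "down"], check=True)
    time.sleep(down_seconds)
    subprocess.run(["ip", "link", "set", interface, "up"], check=True)
```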

20 WEB-DB procedure
Benchmark procedure: 2 steps.
Step 1:
- Determine the baseline performance (SUB + benchmark tools running the workload without faults)
- Tune the workload for a SPEC conformance of 100%
Step 2:
- 3 runs
- Each run comprises all faults specified in the faultload
- Benchmark results: the average of the 3 runs
(Timeline diagram: each run applies the SPECweb workload with ramp-up and ramp-down times, injecting the OS / network / web-server faults while the web server (benchmark target) alternates between workload and idle periods; the reported results come from runs 1-3 after the baseline (BL) step.)

21 Examples of systems benchmarked
Benchmark and compare the dependability of two common web servers:
- Apache web server
- Abyss web server
When running on:
- Windows 2000
- Windows XP
- Windows 2003

22 Dependability results (charts: availability, accuracy and autonomy for Apache and Abyss)

23 Performance in the presence of faults (charts: SPECf, THRf and RTMf for Apache and Abyss)
Baseline performance: Apache: 31, 26, 30; Abyss: 28, 25, 24.
Performance degradation (%): Apache: 55.4, 30.7, 62.3; Abyss: 63.2, 45.2, 46.3.
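
Assuming degradation is measured relative to the baseline (my assumption; the WEB-DB work defines the exact formula), the degradation figures relate to the faulty-run values as follows:

```python
def degradation_pct(baseline, with_faults):
    """Relative loss against the baseline for throughput-like measures (SPECf, THRf).
    For a response-time measure such as RTMf the degradation is an increase, so the
    sign convention flips."""
    return 100.0 * (baseline - with_faults) / baseline


# Illustrative use with the Apache SPEC baseline from the slide: a 55.4% degradation on a
# baseline of 31 would correspond to roughly 31 * (1 - 0.554) ≈ 13.8 under faults.
print(degradation_pct(31, 31 * (1 - 0.554)))   # ≈ 55.4
```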

24 Security benchmark for database management systems
(Diagram: typical layers around a DBMS — client applications and web browsers reach the DBMS either directly or through web and application servers, all connected over the network; a key identifies each layer.)

25 Security attacks vs. system vulnerabilities
Security attacks:
- Intentional attempts to access or destroy data
System vulnerabilities:
- Hidden flaws in the system implementation
- Features of the security mechanisms available
- Configuration of the security mechanisms

26 Approach for the evaluation of security in DBMS
Characterization of DBMS security mechanisms. Our approach:
1) Identification of data criticality levels
2) Definition of database security classes
3) Identification of security requirements for each class
4) Definition of security tests for two scenarios:
   - Compare different DBMS
   - Help DBAs assess security in real installations

27 Database security classes

DB security class | Data criticality level | Required security mechanisms
Class 0 | None | -
Class 1 | Level 1 | User authentication (internal or external)
Class 2 | Level 2 | User authentication; user privileges (system and object privileges)
Class 3 | Level 3 | User authentication; user privileges; encryption in the data communication
Class 4 | Level 4 | User authentication; user privileges; encryption in the data communication; encryption in the data storage
Class 5 | Level 5 | User authentication; user privileges; encryption in the data communication; encryption in the data storage; auditing
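
Because the class definition is cumulative, it can be captured by a small lookup. The sketch below determines the security class a DBMS configuration could claim from the mechanisms it provides; the flag names are mine, not the benchmark's.

```python
# Mechanisms required by each class, in cumulative order (flag names are illustrative).
CLASS_REQUIREMENTS = [
    ("Class 1", ["user_authentication"]),
    ("Class 2", ["user_authentication", "user_privileges"]),
    ("Class 3", ["user_authentication", "user_privileges", "communication_encryption"]),
    ("Class 4", ["user_authentication", "user_privileges", "communication_encryption",
                 "storage_encryption"]),
    ("Class 5", ["user_authentication", "user_privileges", "communication_encryption",
                 "storage_encryption", "auditing"]),
]


def security_class(provided_mechanisms):
    """Highest class whose required mechanisms are all provided; Class 0 otherwise."""
    best = "Class 0"
    for name, required in CLASS_REQUIREMENTS:
        if all(m in provided_mechanisms for m in required):
            best = name
    return best


print(security_class({"user_authentication", "user_privileges"}))  # -> Class 2
```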

28 Requirements for DBMS security mechanisms
Internal user authentication (username/password):

Requirement | Weight (%) | Ref.
The system must provide internal user authentication by using usernames and passwords | 10 | 1.1
The system must guarantee that, besides the DBA users, no other users can read/write to/from the table/file where the usernames and passwords are stored | 6 | 1.2
The password must be encrypted during the communication between the client and the server during the authentication | 6 | 1.3
The passwords must be encrypted in the table/file where they are stored | 4 | 1.4
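
Since each requirement carries a weight, the Security Requirements Fulfillment (SRF) metric reported later can plausibly be computed as a weighted ratio; the sketch below follows that reading, which is my interpretation of the slides rather than the published definition.

```python
def srf(requirements):
    """requirements: list of (weight_pct, fulfilled) pairs for all tested requirements."""
    total = sum(weight for weight, _ in requirements)
    met = sum(weight for weight, fulfilled in requirements if fulfilled)
    return 100.0 * met / total


# Illustrative use with the four internal-authentication requirements above,
# assuming (hypothetically) that requirement 1.4 is not fulfilled:
print(srf([(10, True), (6, True), (6, True), (4, False)]))  # ≈ 84.6
```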

29 Measures and scenarios
Measures provided:
- Security Class (SCL)
- Security Requirements Fulfillment (SRF)
Potential scenarios:
- Compare different DBMS products
- Help DBAs assess security in real installations

30 Comparing DBMS security
Set of tests to verify whether the DBMS fulfills the security requirements.
Development of a database scenario:
- Database model (tables)
- Data criticality levels for each table
- Database accounts corresponding to the several DB user profiles
- System and object privileges for each account
(Diagram: the system under development is connected over the network to the candidate DBMS — Oracle? DB2? PostgreSQL?)
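
To make "set of tests" concrete, here is a rough sketch of one automated check against a PostgreSQL instance: it verifies that a non-DBA account cannot read the catalog where password hashes live (in the spirit of requirement 1.2 above). The connection parameters, the account name and the choice of the psycopg2 driver are assumptions about the test setup.

```python
import psycopg2  # third-party PostgreSQL driver: pip install psycopg2-binary


def non_dba_cannot_read_password_store(dsn_non_dba):
    """Returns True if the non-privileged account is denied access to pg_authid."""
    conn = psycopg2.connect(dsn_non_dba)
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT rolname, rolpassword FROM pg_authid")
            return False          # the account could read the stored password hashes
    except psycopg2.Error:
        return True               # access denied (or the catalog is otherwise protected)
    finally:
        conn.close()


# Example (hypothetical credentials):
# print(non_dba_cannot_read_password_store("dbname=benchdb user=plain_user password=secret"))
```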

31 Database scenario
(Diagram: the database model used in the scenario, with tables classified at data criticality levels 1 to 5.)

32 Example: comparative analysis of two DBMS — Oracle 9i vs. PostgreSQL 7.3

Security mechanism | Req. | Oracle 9i | PostgreSQL
Internal user authentication | All | OK | OK
External user authentication | All | OK | OK
User privileges | 3.1 | OK | Not OK
User privileges | 3.2 | OK | OK
User privileges | 3.3 | OK | OK
User privileges | 3.4 | OK | OK
User privileges | 3.5 | OK | Not OK
Encryption in the data communication | 4.1 | OK | Not OK
Encryption in the data communication | 4.2 | Depends on the method | Not OK
Encryption in the data storage | 5.1 | OK | Not OK
Encryption in the data storage | 5.2 | Not OK | Not OK
Encryption in the data storage | 5.3 | Not OK | Not OK
Auditing | 6.1 | OK | Not OK

33 Results summary

Measure | Oracle 9i (RC4, AES and DES encryption) | Oracle 9i (3DES encryption) | PostgreSQL 7.3
Security Class | Class 5 | Class 5 | Class 1
SRF metric | 96% | 92% | 66%

Oracle 9i does not fulfill all encryption requirements:
- 400% < performance degradation < 2700%
PostgreSQL 7.3:
- Some manual configuration is required to achieve Class 1
- High SRF for a Class 1 DBMS
- Fulfills some Class 2 requirements

34 Conclusions
- Comparing components, systems, architectures, and configurations is essential to improve computer systems, so benchmarks are needed!
- Comparisons can be misleading, so benchmarks must be carefully validated!
- Two ways of having real benchmarks:
  - Industry agreement
  - User community (tacit agreement)

