
1 Software Verification and Validation 1

2 Overarching SE principles  Rigor and formality  Separation of concerns  Modularity and decomposition  Abstraction  Anticipation of change  Generality  Incrementality  Scalability  Compositionality  Heterogeneity 2

3 From Principles to Tools  Principles → Methods & Techniques → Methodologies → Tools 3

4 Verification vs validation  Verification: "Are we building the product right?"  The software should conform to its specification.  Validation: "Are we building the right product?"  The software should do what the user really requires. 4

5 V & V goals  Verification and validation should establish confidence that the software is fit for purpose  This does not mean completely free of defects  Rather, it must be good enough for its intended use  The type of use will determine the degree of confidence that is needed 5

6 V & V confidence  Depends on system’s purpose, user expectations and marketing environment  Software function The level of confidence depends on how critical the software is to an organisation  User expectations Users may have low expectations of certain kinds of software  Marketing environment Getting a product to market early may be more important than finding defects in the program 6

7 Testing and debugging  Defect testing and debugging are distinct processes  Verification and validation is concerned with establishing the existence of defects in a program  Debugging is concerned with locating and repairing these errors  Debugging involves formulating hypotheses about program behaviour, then testing these hypotheses to find the system error 7

8 V & V planning  Careful planning is required to get the most out of testing and inspection processes  Planning should start early in the development process  The plan should identify the balance between static verification and testing  Test planning is about defining standards for the testing process rather than describing product tests 8

9 The V-model of development 9

10 Software inspections  Involve people examining the source representation with the aim of discovering anomalies and defects  Do not require execution of a system  May be used before implementation  May be applied to any representation of the system  Requirements, design, test data, etc.  Very effective technique for discovering errors  Many different defects may be discovered in a single inspection  In testing, one defect may mask another so several executions are required  Reuse of domain and programming knowledge  Reviewers are likely to have seen the types of error that commonly arise 10

11 Inspections and testing  Software inspections Concerned with analysis of the static system representation to discover problems (static verification)  May be supplemented by tool-based document and code analysis.  Discussed in Chapter 15.  Software testing Concerned with exercising and observing product behaviour (dynamic verification)  The system is executed with test data and its operational behaviour is observed. 11

12 Static and dynamic verification  STATIC – Software inspections  Concerned with analysis of the static system representation to discover problems  May be supplemented by tool-based document and code analysis  DYNAMIC – Software testing  Concerned with exercising and observing product behaviour  The system is executed with test data and its operational behaviour is observed 12

13 Program testing  Can reveal the presence of errors, not their absence  A successful test is a test which discovers one or more errors  The only validation technique for non-functional requirements  Should be used in conjunction with static verification to provide full V&V coverage 13

14 Types of testing  Defect testing  Tests designed to discover system defects.  A successful defect test is one which reveals the presence of defects in a system.  Statistical testing  Tests designed to reflect the frequency of user inputs  Used for reliability estimation  To be covered in the next lecture 14
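To make defect testing concrete, here is a minimal JUnit 5 sketch; the Accumulator class, its limit, and its behaviour are all invented for this illustration. Defect tests often probe boundary values, since off-by-one errors cluster at the edges of a valid range, and a test "succeeds" here precisely when it exposes such a defect.

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.junit.jupiter.api.Assertions.assertThrows;

    // Hypothetical class under test, invented for this example.
    class Accumulator {
        private final int limit;
        private int total = 0;

        Accumulator(int limit) { this.limit = limit; }

        // Adds n to the running total; rejects additions that would exceed the limit.
        void add(int n) {
            if (total + n > limit) {
                throw new IllegalArgumentException("limit exceeded");
            }
            total += n;
        }

        int total() { return total; }
    }

    class AccumulatorDefectTest {
        @Test
        void acceptsAdditionExactlyAtTheLimit() {
            Accumulator acc = new Accumulator(10);
            acc.add(10);                 // would fail if the guard mistakenly used >=
            assertEquals(10, acc.total());
        }

        @Test
        void rejectsAdditionJustOverTheLimit() {
            Accumulator acc = new Accumulator(10);
            assertThrows(IllegalArgumentException.class, () -> acc.add(11));
        }
    }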

15 Inspections vs. testing 15

16 Software inspections  These involve people examining the source representation with the aim of discovering anomalies and defects.  Inspections do not require execution of a system, so they may be used before implementation.  They may be applied to any representation of the system (requirements, design, configuration data, test data, etc.).  They have been shown to be an effective technique for discovering program errors. 16

17 Advantages of inspections  During testing, errors can mask (hide) other errors. Because inspection is a static process, you don’t have to be concerned with interactions between errors.  Incomplete versions of a system can be inspected without additional costs. If a program is incomplete, then you need to develop specialized test harnesses to test the parts that are available.  An inspection can also consider broader quality attributes of a program, such as compliance with standards, portability and maintainability. 17

18 Inspections and testing  Inspections and testing are complementary and not opposing verification techniques.  Both should be used during the V & V process.  Inspections can check conformance with a specification but not conformance with the customer’s real requirements.  Inspections cannot check non-functional characteristics such as performance, usability, etc. 18

19 Program inspections  Formalised approach to document reviews  Intended explicitly for defect DETECTION (not correction)  Defects may be  logical errors  anomalies in the code that might indicate an erroneous condition (e.g. an uninitialized variable)  non-compliance with standards 19

20 Inspection procedure  System overview presented to inspection team  Code and associated documents are distributed to inspection team in advance  Inspection takes place and discovered errors are noted  Modifications are made to repair discovered errors  Re-inspection may or may not be required 20

21 Inspection teams  Made up of at least 4 members  Author of the code being inspected  Inspector who finds errors, omissions and inconsistencies  Reader who reads the code to the team  Moderator who chairs the meeting and notes discovered errors  Other roles are Scribe and Chief moderator 21

22 Inspection checklists  Checklist of common errors should be used to drive the inspection  Error checklist is programming language dependent  The 'weaker' the type checking, the larger the checklist  Examples  Initialisation  Constant naming  Loop termination  Array bounds 22

23 An inspection checklist (a)

Data faults:
 Are all program variables initialized before their values are used?
 Have all constants been named?
 Should the upper bound of arrays be equal to the size of the array, or to size - 1?
 If character strings are used, is a delimiter explicitly assigned?
 Is there any possibility of buffer overflow?

Control faults:
 For each conditional statement, is the condition correct?
 Is each loop certain to terminate?
 Are compound statements correctly bracketed?
 In case statements, are all possible cases accounted for?
 If a break is required after each case in case statements, has it been included?

Input/output faults:
 Are all input variables used?
 Are all output variables assigned a value before they are output?
 Can unexpected inputs cause corruption?

23
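To make these checklist questions concrete, here is a small, deliberately faulty Java fragment, invented for this illustration, annotated with the items an inspector would raise. (The Java compiler already enforces some items, such as initializing local variables; in languages with weaker type checking the checklist grows correspondingly, as noted on the previous slide.)

    public class ChecklistExamples {

        // Data/control fault: the loop bound should be i < values.length
        // (i.e. upper index values.length - 1); using <= reads one element
        // past the end and throws ArrayIndexOutOfBoundsException.
        static int sumValues(int[] values) {
            int sum = 0;                     // checklist: initialized before use
            for (int i = 0; i <= values.length; i++) {
                sum += values[i];
            }
            return sum;
        }

        // Control fault checklist items: are all possible cases accounted for,
        // and has a break been included after each case where one is required?
        static String describe(int code) {
            String label;
            switch (code) {
                case 0:
                    label = "ok";
                    break;                   // omitting this would fall through
                case 1:
                    label = "warning";
                    break;
                default:                     // covers all remaining cases
                    label = "unknown";
            }
            return label;
        }
    }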

24 An inspection checklist (b)

Interface faults:
 Do all function and method calls have the correct number of parameters?
 Do formal and actual parameter types match?
 Are the parameters in the right order?
 If components access shared memory, do they have the same model of the shared memory structure?

Storage management faults:
 If a linked structure is modified, have all links been correctly reassigned?
 If dynamic storage is used, has space been allocated correctly?
 Is space explicitly deallocated after it is no longer required?

Exception management faults:
 Have all possible error conditions been taken into account?

24

25 Inspection rate  500 statements/hour during overview  125 source statements/hour during individual preparation  90-125 statements/hour can be inspected during the inspection meeting  Inspection is therefore an expensive process  Inspecting 500 lines costs about 40 person-hours 25
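For a four-person team (the minimum team size from slide 21), these rates roughly account for the 40 person-hours estimate for 500 statements:

    Overview:    500 ÷ 500 statements/hour  = 1 hour  × 4 people ≈  4 person-hours
    Preparation: 500 ÷ 125 statements/hour  = 4 hours × 4 people = 16 person-hours
    Meeting:     500 ÷ ~100 statements/hour ≈ 5 hours × 4 people ≈ 20 person-hours
    Total:                                                       ≈ 40 person-hours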

26 Automated static analysis  Static analyzers are software tools for source text processing  They parse the program text and try to discover potentially erroneous conditions and bring these to the attention of the V & V team  Very effective as an aid to inspections  A supplement to but not a replacement for inspections 26

27 Stages of static analysis  Control flow analysis  Checks for loops with multiple exit or entry points, finds unreachable code, etc.  Data use analysis  Detects uninitialized variables, variables written twice without an intervening use, variables which are declared but never used, etc.  Interface analysis  Checks the consistency of routine and procedure declarations and their use 27
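For instance, control-flow and data-use analysis would report anomalies like the ones marked in this invented fragment:

    public class AnalysisTargets {

        static int example(int x) {
            int unused = 42;    // data-use analysis: declared but never used
            int y = 1;          // data-use analysis: written here...
            y = 2;              // ...and rewritten with no intervening use (a "DD anomaly")

            if (x > 0) {
                return y + x;
            } else {
                return y - x;
            }
            // control-flow analysis: any statement placed here would be unreachable
        }
    }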

28 Stages of static analysis (2)  Information flow analysis  Identifies the dependencies of output variables  Does not detect anomalies, but highlights information for code inspection or review  Path analysis  Identifies paths through the program and sets out the statements executed in that path  Also potentially useful in the review process  Both these stages generate vast amounts of information  Handle with caution! 28

29 Static Analysis Example

public class LoadBalancer extends Component {
    // Data-use analysis would flag these fields: none of them is ever
    // initialized, so unboxing them in the comparisons below can throw
    // a NullPointerException.
    Long curLoad;
    Long lmt;
    Long preLmt;

    public void handle(Event e) {
        if (e.getName().equals("NewRequest")) {
            manageCurrentLoad(e.getRequest());
        }
        if (e.getName().equals("SetLimit")) {
            preLmt = lmt;
            lmt = Long.parseLong(e.getAttr("Limit"));
            if (lmt < preLmt)                  // NPE if no limit was previously set
                send(new Event("LimitReduced"));
        }
    }

    public void manageCurrentLoad(Request r) {
        if (curLoad < lmt) {                   // NPE: curLoad is never initialized
            curLoad = curLoad + 1;
            // Start processing new request
        }
        else
            send(new Event("LoadTooHigh"));
    }
}

(The accompanying figure shows the ICFG, the interprocedural control flow graph, of this code.)

29

30 Software quality  Quality simply means that a product should meet its specification.  This is problematical for software systems  There is a tension between customer quality requirements (efficiency, reliability, etc.) and developer quality requirements (maintainability, reusability, etc.);  Some quality requirements are difficult to specify in an unambiguous way;  Software specifications are usually incomplete and often inconsistent.  The focus may be ‘fitness for purpose’ rather than specification conformance. 30

31 Software fitness for purpose  Has the software been properly tested?  Is the software sufficiently dependable to be put into use?  Is the performance of the software acceptable for normal use?  Is the software usable?  Is the software well-structured and understandable?  Have programming and documentation standards been followed in the development process? 31

32 Non-functional characteristics  The subjective quality of a software system is largely based on its non-functional characteristics.  This reflects practical user experience – if the software’s functionality is not what is expected, then users will often just work around this and find other ways to do what they want to do.  However, if the software is unreliable or too slow, then it is practically impossible for them to achieve their goals. 32

33 Software quality attributes  Safety  Security  Reliability  Resilience  Robustness  Understandability  Testability  Adaptability  Modularity  Complexity  Portability  Usability  Reusability  Efficiency  Learnability 33

34 Software Qualities  Qualities (a.k.a. “ilities”) are goals in the practice of software engineering  External vs. Internal qualities  Product vs. Process qualities 34

35 External vs. Internal Qualities  External qualities are visible to the user  reliability, efficiency, usability  Internal qualities are the concern of developers  they help developers achieve external qualities  verifiability, maintainability, extensibility, evolvability, adaptability 35

36 Product vs. Process Qualities  Product qualities concern the developed artifacts  maintainability, understandability, performance  Process qualities deal with the development activity  products are developed through process  maintainability, productivity, timeliness 36

37 Some Software Qualities  Correctness  ideal quality  established w.r.t. the requirements specification  absolute  Reliability  statistical property  probability that software will operate as expected over a given period of time  relative 37

38 Some Software Qualities (cont.)  Robustness  “reasonable” behavior in unforeseen circumstances  subjective  a specified requirement is an issue of correctness; an unspecified requirement is an issue of robustness  Usability  ability of end-users to easily use software  extremely subjective 38

39 Some Software Qualities (cont.)  Understandability  ability of developers to easily understand produced artifacts  internal product quality  subjective  Verifiability  ease of establishing desired properties  performed by formal analysis or testing  internal quality 39

40 Some Software Qualities (cont.)  Performance  equated with efficiency  assessable by measurement, analysis, and simulation  Evolvability  ability to add or modify functionality  addresses adaptive and perfective maintenance  problem: evolution of implementation is too easy  evolution should start at requirements or design 40

41 Some Software Qualities (cont.)  Reusability  ability to construct new software from existing pieces  must be planned for  occurs at all levels: from people to process, from requirements to code  Interoperability  ability of software (sub)systems to cooperate with others  easily integratable into larger systems  common techniques include APIs, plug-in protocols, etc. 41

42 Some Software Qualities (cont.)  Scalability  ability of a software system to grow in size while maintaining its properties and qualities  assumes maintainability and evolvability  goal of component-based development 42

43 Some Software Qualities (cont.)  Heterogeneity  ability to compose a system from pieces developed in multiple programming languages, on multiple platforms, by multiple developers, etc.  necessitated by reuse  goal of component-based development  Portability  ability to execute in new environments with minimal effort  may be planned for by isolating environment-dependent components  necessitated by the emergence of highly-distributed systems (e.g., the Internet)  an aspect of heterogeneity 43

44 Software Process Qualities  Process is reliable if it consistently leads to high- quality products  Process is robust if it can accommodate unanticipated changes in tools and environments  Process performance is productivity  Process is evolvable if it can accommodate new management and organizational techniques  Process is reusable if it can be applied across projects and organizations 44

45 Assessing Software Qualities  Qualities must be measurable  Measurement requires that qualities be precisely defined  Improvement requires accurate measurement  Currently most qualities are informally defined and are difficult to assess 45

46 Software Engineering “Axioms”  Adding developers to a project will likely result in further delays and accumulated costs  Basic tension of software engineering  better, cheaper, faster — pick any two  functionality, scalability, performance — pick any two  The longer a fault exists in software  the more costly it is to detect and correct  the less likely it is to be properly corrected  Up to 70% of all faults detected in large-scale software projects are introduced in requirements and design  detecting the causes of those faults early may reduce their resulting costs by a factor of 100 or more 46

47 Reviews and inspections  A group examines part or all of a process or system and its documentation to find potential problems.  Software or documents may be 'signed off' at a review which signifies that progress to the next development stage has been approved by management.  There are different types of review with different objectives  Inspections for defect removal (product);  Reviews for progress assessment (product and process);  Quality reviews (product and standards). 47

48 Quality reviews  A group of people carefully examine part or all of a software system and its associated documentation.  Code, designs, specifications, test plans, standards, etc. can all be reviewed.  Software or documents may be 'signed off' at a review which signifies that progress to the next development stage has been approved by management. 48

49 Phases in the review process  Pre-review activities  Pre-review activities are concerned with review planning and review preparation  The review meeting  During the review meeting, an author of the document or program being reviewed should ‘walk through’ the document with the review team.  Post-review activities  These address the problems and issues that have been raised during the review meeting. 49

50 The software review process 50

51 Distributed reviews  The processes suggested for reviews assume that the review team has a face-to-face meeting to discuss the software or documents that they are reviewing.  However, project teams are now often distributed, sometimes across countries or continents, so it is impractical for team members to meet face to face.  Remote reviewing can be supported using shared documents where each review team member can annotate the document with their comments. 51

52 Measuring software qualities 52

53 Measuring software  A quality metric should be a predictor of product quality.  Classes of product metric  Dynamic metrics which are collected by measurements made of a program in execution;  Static metrics which are collected by measurements made of the system representations;  Dynamic metrics help assess efficiency and reliability  Static metrics help assess complexity, understandability and maintainability. 53

54 Dynamic and static metrics  Dynamic metrics are closely related to software quality attributes  It is relatively easy to measure the response time of a system (performance attribute) or the number of failures (reliability attribute).  Static metrics have an indirect relationship with quality attributes  You need to try and derive a relationship between these metrics and properties such as complexity, understandability and maintainability. 54

55 Static software product metrics

 Fan-in/Fan-out: Fan-in is a measure of the number of functions or methods that call another function or method (say X). Fan-out is the number of functions that are called by function X. A high value for fan-in means that X is tightly coupled to the rest of the design and changes to X will have extensive knock-on effects. A high value for fan-out suggests that the overall complexity of X may be high because of the complexity of the control logic needed to coordinate the called components.

 Length of code: This is a measure of the size of a program. Generally, the larger the size of the code of a component, the more complex and error-prone that component is likely to be. Length of code has been shown to be one of the most reliable metrics for predicting error-proneness in components.

55
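A toy illustration, invented for this slide, of how the two counts are taken:

    class FanExample {
        // fan-in of log = 3: it is called by a, b and c
        static void log(String msg) { System.out.println(msg); }

        static void a() { log("a"); }
        static void b() { log("b"); }

        // fan-out of c = 2: it calls log and helper
        static void c() {
            log("c");
            helper();
        }
        static void helper() { }
    }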

56 Static software product metrics

 Cyclomatic complexity: This is a measure of the control complexity of a program. This control complexity may be related to program understandability. I discuss cyclomatic complexity in Chapter 8.

 Length of identifiers: This is a measure of the average length of identifiers (names for variables, classes, methods, etc.) in a program. The longer the identifiers, the more likely they are to be meaningful and hence the more understandable the program.

 Depth of conditional nesting: This is a measure of the depth of nesting of if-statements in a program. Deeply nested if-statements are hard to understand and potentially error-prone.

 Fog index: This is a measure of the average length of words and sentences in documents. The higher the value of a document's Fog index, the more difficult the document is to understand.

56
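As a quick illustration of the first and third metrics, here is an invented method annotated with its decision points; like many tools, this count treats each short-circuit operator (&&, ||) as an extra decision, so complexity = decisions + 1.

    class ShippingRules {
        // Decision points: outer if (1), && (2), inner if (3), for (4)
        // Cyclomatic complexity = 4 + 1 = 5
        static double shippingCost(double weight, boolean express, boolean fragile,
                                   double[] surcharges) {
            double cost = 5.0;
            if (weight > 10.0 && !express) {
                if (fragile) {               // if inside if: conditional nesting depth = 2
                    cost += 4.0;
                }
                cost += 2.5;
            }
            for (double s : surcharges) {
                cost += Math.max(s, 0.0);    // no extra branch needed here
            }
            return cost;
        }
    }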

57 …and even more static metrics

 Weighted methods per class (WMC): This is the number of methods in each class, weighted by the complexity of each method. Therefore, a simple method may have a complexity of 1, and a large and complex method a much higher value. The larger the value for this metric, the more complex the object class. Complex objects are more likely to be difficult to understand. They may not be logically cohesive, so cannot be reused effectively as superclasses in an inheritance tree.

 Depth of inheritance tree (DIT): This represents the number of discrete levels in the inheritance tree where subclasses inherit attributes and operations (methods) from superclasses. The deeper the inheritance tree, the more complex the design. Many object classes may have to be understood to understand the object classes at the leaves of the tree.

 Number of children (NOC): This is a measure of the number of immediate subclasses of a class. It measures the breadth of a class hierarchy, whereas DIT measures its depth. A high value for NOC may indicate greater reuse. It may mean that more effort should be made in validating base classes because of the number of subclasses that depend on them.

57
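For example, in this invented hierarchy (conventions vary; some tools count the root class as depth 1 rather than 0):

    class Vehicle { }                  // DIT 0 (root); NOC = 2 (Car and Truck)
    class Car extends Vehicle { }      // DIT 1; NOC = 1 (ElectricCar)
    class Truck extends Vehicle { }    // DIT 1; NOC = 0
    class ElectricCar extends Car { }  // DIT 2: understanding it may require
                                       // understanding Car and Vehicle too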

58 …and some more

 Coupling between object classes (CBO): Classes are coupled when methods in one class use methods or instance variables defined in a different class. CBO is a measure of how much coupling exists. A high value for CBO means that classes are highly dependent, and therefore it is more likely that changing one class will affect other classes in the program.

 Response for a class (RFC): RFC is a measure of the number of methods that could potentially be executed in response to a message received by an object of that class. Again, RFC is related to complexity. The higher the value for RFC, the more complex a class and hence the more likely it is that it will include errors.

 Lack of cohesion in methods (LCOM): LCOM is calculated by considering pairs of methods in a class. LCOM is the difference between the number of method pairs without shared attributes and the number of method pairs with shared attributes. The value of this metric has been widely debated and it exists in several variations. It is not clear if it really adds any additional, useful information over and above that provided by other metrics.

58
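A worked LCOM calculation on an invented class, using the pairwise definition above:

    import java.util.ArrayList;
    import java.util.List;

    class Report {
        private String title;
        private final List<String> rows = new ArrayList<>();

        void setTitle(String t) { title = t; }              // uses: title
        String header()         { return "== " + title; }   // uses: title
        void addRow(String r)   { rows.add(r); }             // uses: rows
        int rowCount()          { return rows.size(); }      // uses: rows
    }

    // 4 methods give 6 method pairs.
    // Pairs sharing an attribute: (setTitle, header), (addRow, rowCount)  = 2
    // Pairs sharing no attribute: the remaining pairs                     = 4
    // LCOM = 4 - 2 = 2, hinting that Report bundles two unrelated
    // responsibilities (title handling and row handling).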

