1
Chapter 4: Systems Development & Maintenance Activities
2
PARTICIPANTS: Systems professionals, end users, stakeholders, accountants & auditors, internal IT
3
ACCOUNTANTS/AUDITORS Why are accountants/auditors involved? They are experts in financial transaction processes, and the quality of the AIS is determined during the SDLC. How are accountants involved? As users (e.g., user views and accounting techniques), as members of the SDLC development team (e.g., ensuring control risk is minimized), and as auditors (e.g., ensuring auditable systems, auditing with specific techniques)
4
IS Development Options: in-house development staff or purchase of commercial systems. General rule: never build if you can acquire a system that will provide 80% of your needs. Question: when would you want to build your own system?
5
TRENDS IN COMMERCIAL SOFTWARE Relatively low cost for general-purpose software Industry-specific vendors Businesses too small to have in-house IS staff Downsizing & DDP (cloud computing)
6
Turnkey systems (alpha and beta testing) General accounting systems, typically in modules Special-purpose systems (example: banking) Office automation systems (purpose: improve productivity) Enterprise systems (ERP): SAP, PeopleSoft, Baan, Oracle Vendor-supported systems Hybrids (custom system plus commercial software) TYPES OF COMMERCIAL SYSTEMS
7
Office automation systems
8
Vendor-supported systems Healthcare
9
Advantages: shorter implementation time, lower cost, reliability. Disadvantages: dependence on the vendor (loss of independence), customization needs, maintenance. COMMERCIAL SYSTEMS
10
SYSTEMS DEVELOPMENT LIFE CYCLE (SDLC) A company may satisfy some of its information needs by purchasing commercial software and develop other systems in-house. Both approaches are enhanced by formal procedures that lend structure to the decision-making process. The SDLC represents "best practices" for systems development.
11
SYSTEMS DEVELOPMENT LIFE CYCLE (SDLC) New systems 1. Systems planning 2. Systems analysis 3. Conceptual systems design 4. System evaluation and selection 5. Detailed design 6. System programming and testing 7. System implementation 8. System maintenance SDLC -- Figure 4-1 [p.141]
12
SYSTEMS DEVELOPMENT LIFE CYCLE (SDLC)
13
PURPOSE: To link individual systems projects to the strategic objectives of the firm [Figure 4-2, p. 142]. Who does it? A steering committee: CEO, CFO, CIO, senior management, auditors, external parties. Ethics and auditing standards limit when auditors can serve on this committee. Long-range planning: 3-5 years. Allocation of resources is broad (strategic budgeting and allocation of system resources: human resources, hardware, software, telecommunications) SYSTEMS PLANNING–PHASE I
14
To link individual systems projects to the strategic objectives of the firm
15
SYSTEMS PLANNING-PHASE I Level 1 = Strategic systems planning Why? 1. A changing plan is better than no plan 2. Reduces crises in systems development 3. Provides authorization control for SDLC 4. It works! Level 2 = Project planning Project proposal Project schedule
16
Project Proposal Report: what you have found, recommendations, financial feasibility. Problem definition: Nature of the problem: separate the problem from its symptoms. Scope of the project: establish boundaries, budget, and schedule. Objectives of the project: what the user thinks the system should do.
17
Resulting Management Decision: drop the project, fix a simple problem, or authorize the analysis phase
18
Auditor's role in systems planning: auditability, security, controls; reduce the risks of unneeded, inefficient, ineffective, or fraudulent systems SYSTEMS PLANNING–PHASE I
19
Identify users' needs, prepare proposals, evaluate proposals, prioritize individual projects, schedule work. Project plan: allocates resources to specific projects. Project proposal: go or no-go. Project schedule: represents management's commitment. SYSTEMS PLANNING–PHASE I SUMMARY
20
PURPOSE: Effectively identify and analyze the needs of the users for the new system. Data gathering and survey: Written documents: reviewing key documents (see list, pp. 147, 182). Interviews: structured or unstructured. Questionnaires (iterations are needed). Observation: visits by appointment, participant observation, sampling. SYSTEMS ANALYSIS–PHASE II
21
Survey step Disadvantages: tar-pit syndrome, thinking inside the box. Advantages: identifying aspects of the current system to keep, forcing analysts to understand the system, isolating the root of problem symptoms.
22
Data sources Users Data stores Processes Data flows Controls Transaction volumes Error rates Resource costs Bottlenecks Redundant operations Gathering facts SYSTEMS ANALYSIS- PHASE II
23
Systems analysis report [Figure 4-3, p. 148] Auditor's role: CAATTs (e.g., embedded modules). Advanced audit features cannot be added easily to existing systems for technical reasons (3GLs such as COBOL do not support ALE), so the auditor should analyze what is best suited for the system SYSTEMS ANALYSIS–PHASE II
25
PURPOSE: Develop alternative systems that satisfy the system requirements identified during systems analysis. 1. Top-down (structured) design [see Figure 4-4, p. 150]: designs are general rather than specific, with enough detail to demonstrate differences between alternatives (example: Figure 4-5, p. 151). 2. Object-oriented design (OOD): reusable objects; creation of modules (a library or inventory of objects). 3. Auditor's role: special auditability features. CONCEPTUAL SYSTEMS DESIGN–PHASE III
28
SDLC Major system aspects Centralized or distributed Online or batch PC-based? How will input be captured? Necessary reports
29
SDLC Make-or-buy decision Packaged software: does it meet at least 75% of requirements? Change business procedures for part or all of the remainder? Customize for part or all of the remainder? Custom software: programmers write the code. Outsourcing: the system is developed by an external organization.
30
SDLC Build a prototype: a limited working system or subset. It does not need true functionality, but its output looks like the anticipated system output; it is a working model that can be modified and fine-tuned. Uses high-level software tools – CASE (Computer-Aided Software Engineering). Best for small-scale systems.
31
SDLC Presentation All alternatives Selected plan Prototype of the system Obtain authorization to proceed
32
PURPOSE: Process that seeks to identify the optimal solution from the alternatives 1. Perform detailed feasibility study Technical feasibility [existing IT or new IT?] Economic feasibility Legal feasibility (SOX and SAS 109 : privacy and confidentiality of information) Operational feasibility Degree of compatibility between the firm’s existing procedures and personnel skills, and requirements of the new system Schedule feasibility [implementation] 2. Perform a cost-benefit analysis Identify costs Identify benefits Compare the two SYSTEM EVALUATION & SELECTION– PHASE IV
33
ONE-TIME COSTS: Hardware acquisition Site preparation Software acquisition Systems design Programming Testing Data conversion Training RECURRING COSTS: Hardware maintenance Software maintenance Insurance Supplies Personnel Allocated existing IS SYSTEM EVALUATION & SELECTION-PHASE IV Cost-Benefit Analysis: Costs
34
TANGIBLE: Increased revenues (increased sales in existing markets, expansion into new markets). Cost reduction (labor reduction, operating cost reduction, supplies and overhead, reduced inventories, less expensive equipment, reduced equipment maintenance). INTANGIBLE: Increased customer satisfaction, improved employee satisfaction, more current information, improved decision making, faster response to competitors' actions, more effective operations, better internal and external communications, improved control environment. SYSTEM EVALUATION & SELECTION–PHASE IV Cost-Benefit Analysis: Benefits
35
NPV [Table 4-4]: NPV of benefits (over life of system) – NPV of costs (over life of system) = NPV. If NPV > 0, the system is economically feasible. When choosing between projects, choose the one with the greatest NPV. Payback [Figures 4-7a, 4-7b] Cost-Benefit Analysis: Comparison
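The NPV comparison can be sketched in a few lines of code. This is a minimal illustration, not from the text; the discount rate and cash-flow figures are hypothetical.

```python
def npv(rate, cash_flows):
    """Net present value of cash flows received at the end of years 1..n."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

# Hypothetical project: benefits and costs over a 3-year system life, 10% rate
benefits = [50_000, 60_000, 70_000]
costs = [40_000, 30_000, 30_000]
project_npv = npv(0.10, benefits) - npv(0.10, costs)

# If project_npv > 0, the system is economically feasible; when choosing
# between alternatives, pick the one with the greatest NPV.
```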
37
PURPOSE: Produce a detailed description of the proposed system that satisfies system requirements identified during systems analysis and is in accordance with conceptual design. User views ( Output requirements & Input requirements ) Files and databases Systems processing Systems controls and backup DETAILED DESIGN–PHASE V
38
Quality Assurance "Walkthrough": the process of inspecting algorithms and source code by following paths through the algorithms or code as determined by input conditions and choices made along the way. Algorithm: a process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer (e.g., a basic algorithm for division). Quality assurance group: programmers, systems analysts, users, and internal auditors.
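As an illustration of what a walkthrough inspects, here is a hypothetical division algorithm (not from the text) that a quality assurance group could trace by hand, following the loop for chosen inputs and checking each branch.

```python
def divide(dividend, divisor):
    """Integer division by repeated subtraction; divide(9, 3) -> (3, 0)."""
    if divisor == 0:
        raise ValueError("divisor must be nonzero")
    quotient = 0
    remainder = dividend
    # Walkthrough: trace this loop for specific inputs, e.g. 9 and 3,
    # verifying quotient and remainder at each iteration.
    while remainder >= divisor:
        remainder -= divisor
        quotient += 1
    return quotient, remainder
```

Tracing divide(9, 3): the remainder falls 9 → 6 → 3 → 0 while the quotient climbs to 3, which is exactly the path-by-path inspection a walkthrough performs.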
39
DETAILED DESIGN – PHASE V Detailed Design Report Designs for input screens and source documents Designs for screen outputs, reports, operational documents Normalized database Database structures and diagrams Data flow diagrams (DFD’s) Database models (ER, Relational) Data dictionary Processing logic (flow charts)
40
SYSTEM PROGRAMMING & TESTING–PHASE VI Program the application: procedural languages (3GLs: COBOL, C), event-driven languages (e.g., Visual Basic, with GUIs), OO languages (Java), hybrids (C++, bridging 3GL and OOP). Test the application [Figure 4-8]: testing methodology, testing offline before deploying online, test data. Why? Test data can provide valuable future benefits.
41
PURPOSE: Database structures are created and populated with data, applications are coded and tested, equipment is purchased and installed, employees are trained, the system is documented, and the new system is installed. Testing the entire system (modules are tested as a whole system) Documenting the system Designer and programmer documentation Operator documentation (run manual) User documentation (user skills varied: online tutorials, help features) SYSTEMS IMPLEMENTATION– PHASE VII
43
Conversion: the transfer of data from its current form to the format or medium required by the new system. Converting the databases: validation, reconciliation, backup. Converting to the new system (auditor involvement virtually stops here!): cold-turkey cutover (big bang), phased cutover (modular), parallel operation cutover. SYSTEMS IMPLEMENTATION–PHASE VII Conversion
45
Reviewed by independent team to measure the success of the system Systems design adequacy [see list p. 170, 203] Accuracy of time, cost, and benefit estimates [see list p. 170, 204] Auditor’s role? SYSTEMS IMPLEMENTATION– PHASE VII Post-Implementation Review
46
Provide technical expertise AIS: GAAP, GAAS, SEC, IRS Legal Social / behavioral IS/IT (if capable) Effective and efficient ways to limit application testing Specify documentation standards Verify control adequacy COSO – SAS No. 78 – PCAOB Standard #1 Impact on scope of external auditors SYSTEMS IMPLEMENTATION– PHASE VII Auditors’ Role
47
PURPOSE : Changing systems to accommodate changes in user needs 80/20 rule Importance of documentation? Facilitate efficient changes SYSTEMS MAINTENANCE–PHASE VIII
48
SDLC phases: Preliminary feasibility / project authorization, Systems planning, Systems analysis, Conceptual design, Systems selection, Detailed design, System implementation. Key deliverables: Project proposal, Project schedule, Systems analysis report, DFD (general and detail), ER diagram, Relational model, Normalized data, Feasibility study, Cost-benefit analysis, System selection report, Detailed design report, Program flowcharts, Post-implementation review, Documentation, User acceptance report
49
A materially flawed financial application will eventually corrupt financial data, which will then be incorrectly reported in the financial statements. Therefore, the accuracy and integrity of the IS directly affect the accuracy of the client's financial data. A properly functioning systems development process ensures that only needed applications are created, that they are properly specified, have adequate controls, and are thoroughly tested before implementation. The systems maintenance process ensures that only legitimate changes are made to applications and that changes are tested before implementation. Otherwise, the auditor must rely on application testing and substantive testing. CONTROLLING & AUDITING THE SDLC
50
Systems authorization activities (economic justification and feasibility) User specification activities (involvement) Technical design activities Documentation is evidence of controls Documentation is a control! Internal audit participation (control function and liaison between users and systems professionals) User test and acceptance procedures (assurance group) Audit objectives (p. 206) Audit procedures (p. 206) CONTROLLING & AUDITING THE SDLC Controlling New Systems Development
51
Audit objectives Verify SDLC activities are applied consistently and in accordance with management’s policies Verify original system is free from material errors and fraud Verify system necessary and justified Verify documentation adequate and complete Audit procedures How verify SDLC activities applied consistently? How verify system is free from material errors and fraud? How verify system is necessary? How verify system is justified? How verify documentation is adequate and complete? See page 174 for a list CONTROLLING & AUDITING THE SDLC Audit Objectives & Procedures
52
Four minimum controls: Formal authorization Technical specifications Retesting Updating the documentation When maintenance causes extensive changes to program logic, additional controls, such as involvement of the internal auditor and user acceptance testing, may be needed CONTROLLING & AUDITING THE SDLC Controlling Systems Maintenance
53
Source program library (SPL) controls Why? What trying to prevent? Unauthorized access Unauthorized program changes SPLMS [Figure 4-13, p. 177] SPLMS Controls Storing programs on the SPL Retrieving programs for maintenance purposes Detecting obsolete programs Documenting program changes (audit trail) CONTROLLING & AUDITING THE SDLC Controlling Systems Maintenance
54
Password control On a specific program Separate test libraries Audit trail and management reports Describing software changes Program version numbers Controlling access to maintenance [SPL] commands CONTROLLING & AUDITING THE SDLC Controlled SPL Environment
55
Audit objectives Detect any unauthorized program changes Verify that maintenance procedures protect applications from unauthorized changes Verify applications are free from material errors Verify SPL are protected from unauthorized access CONTROLLING & AUDITING THE SDLC Audit Objectives & Procedures
56
Audit procedures Figure 4-14, p.179 Identify unauthorized changes Reconcile program version numbers Confirm maintenance authorization Identify application errors Reconcile source code [after taking a sample] Review test results Retest the program Testing access to libraries Review programmer authority tables Test authority table CONTROLLING & AUDITING THE SDLC Audit Objectives & Procedures
58
End Chapter 4: Systems Development & Maintenance Activities
59
Evaluating Asset Safeguarding & Data Integrity The auditor attempts to determine whether assets could be destroyed, damaged, or used for unauthorized purposes, and how well the completeness, soundness (a state or condition free from damage or decay), purity, and veracity (truth) of data are maintained
60
How to evaluate that? The evaluation process involves the auditor making a complex global judgment using evidence collected on the strengths and weaknesses of internal control (IC) How to evaluate IC? Common measures are: dollars (or other currency) lost (for assets), quantity of errors (for data)
61
Dynamic System Since a system of IC usually contains stochastic elements, the measures should be expressed probabilistically Both qualitative and quantitative approaches can be used when making the evaluation decision
62
How to evaluate that? Qualitative risk assessment—Ranks threats by nondollar values and is based more on scenario, intuition, and experience Quantitative risk assessment—Deals with dollar amounts. It attempts to assign a cost (monetary value) to the elements of risk assessment and the assets and threats of a risk analysis.
63
Example 1 NIST 800-26 is a document that uses confidentiality, integrity, and availability as categories for a loss, and rates each loss on a scale of low, medium, or high. A rating of low, medium, or high is subjective. In this example, the following categories are defined: Low—Minor inconvenience; can be tolerated for a short period of time but will not result in financial loss. Medium—Can result in damage to the organization, cost a moderate amount of money to repair, and result in negative publicity. High—Will result in a loss of goodwill between the company and its clients or employees; may result in a large legal action or fine, or cause the company to significantly lose revenue or earnings
64
Example 1 The flip side of performing a qualitative assessment is that you are not working with dollar values; it therefore lacks the rigor (detail) that accounting teams and management typically prefer.
65
Example 1 The types of qualitative assessment techniques include these: The Delphi Technique: a structured communication technique, originally developed as a systematic, interactive forecasting method that relies on a panel of experts. Facilitated Risk Assessment Process (FRAP): a subjective process that obtains results by asking a series of questions. It places risks into one of 26 categories. FRAP is designed to be completed in a matter of hours, making it a quick process to perform.
66
Example 2 Performing a quantitative risk assessment involves quantifying all elements of the process, including asset value, impact, threat frequency, safeguard effectiveness, safeguard costs, uncertainty, and probability.
67
How to quantify? Determine the asset value (AV) for each information asset. Identify threats to the asset. Determine the exposure factor (EF) for each information asset in relation to each threat. Calculate the single loss expectancy (SLE). Calculate the annualized rate of occurrence (ARO). Calculate the annualized loss expectancy (ALE)
68
Some considerations The advantage of a quantitative risk assessment is that it assigns dollar values, which is easy for management to work with and understand. However, a disadvantage of a quantitative risk assessment is that it is also based on dollar amounts. Consider that it’s difficult, if not impossible, to assign dollar values to all elements. Therefore, some qualitative measures must be applied to quantitative elements. Even then, this is a huge responsibility; therefore, a quantitative assessment is usually performed with the help of automated software tools
69
STEP BY STEP Determine the exposure factor: This is a subjective potential percentage of loss to a specific asset if a specific threat is realized. This is usually in the form of a percentage, similar to how weather reports predict the likelihood of weather conditions
70
Calculate the single loss expectancy (SLE): The SLE value is a dollar figure that represents the organization's loss from a single occurrence of a threat against a particular information asset. Single Loss Expectancy = Asset Value × Exposure Factor Items to consider when calculating the SLE include the physical destruction or theft of assets, loss of data, theft of information, and threats that might delay processing.
71
Assign a value for the annualized rate of occurrence (ARO): The ARO represents the estimated frequency at which a given threat is expected to occur. Simply stated, how many times is this expected to happen in one year?
72
Assign a value for the annualized loss expectancy (ALE): The ALE is the annual expected financial loss to an organization's information asset because of a particular threat occurring within that same calendar year. Annualized Loss Expectancy (ALE) = Single Loss Expectancy (SLE) × Annualized Rate of Occurrence (ARO) The ALE is typically the value that senior management needs to assess to prioritize resources and determine which threats should receive the most attention
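The quantitative steps above reduce to two multiplications. A minimal sketch, with all dollar values and frequencies hypothetical:

```python
asset_value = 100_000    # AV: value of the information asset ($), hypothetical
exposure_factor = 0.25   # EF: fraction of the asset lost if the threat occurs
aro = 0.5                # ARO: expected occurrences per year (once every 2 years)

sle = asset_value * exposure_factor  # Single Loss Expectancy = AV x EF
ale = sle * aro                      # Annualized Loss Expectancy = SLE x ARO

print(sle, ale)  # 25000.0 12500.0
```

Management would compare the ALE of each threat to the annual cost of the safeguard that mitigates it, giving the prioritization described above.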
73
Analyze the risk to the organization—The final step is to evaluate the data and decide to accept, reduce, or transfer the risk
74
Data Integrity Controls Referential integrity guarantees that all foreign keys reference existing primary keys. Controls in most databases should prevent a primary key from being deleted while it is linked to existing foreign keys. Entity integrity ensures that each tuple contains a non-null primary key. In the example, the primary keys are the names of banks; without the capability to associate each record with a bank, entity integrity cannot be maintained and the database is not intact
75
Data Integrity Controls
76
ADVISORS
Advisor No. | Last Name | First Name | Office No.
1418        | Howard    | Glen       | 420
1419        | Melton    | Amy        | 316
1503        | Zhang     | Xi         | 202
1506        | Radowski  | J.D.       | 203

STUDENTS
Student ID  | Last Name | First Name | Phone No. | Advisor No.
333-33-3333 | Simpson   | Alice      | 333-3333  | 1418
111-11-1111 | Sanders   | Ned        | 444-4444  | 1418
123-45-6789 | Moore     | Artie      | 555-5555  | 1503

Advisor No. is a foreign key in the STUDENTS table. Every instance of Advisor No. in the STUDENTS table either matches an instance of the primary key in the ADVISORS table or is null.
77
Note that within each table there are no duplicate primary keys and no null primary keys, consistent with the entity integrity rule.
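The two integrity rules can be expressed as simple checks. This is a hypothetical sketch using the ADVISORS/STUDENTS data from the slide; the function names and data structures are illustrative, not part of the text.

```python
# ADVISORS keyed by primary key Advisor No.
advisors = {
    1418: ("Howard", "Glen", 420),
    1419: ("Melton", "Amy", 316),
    1503: ("Zhang", "Xi", 202),
    1506: ("Radowski", "J.D.", 203),
}

# STUDENTS rows; the last field is the foreign key Advisor No.
students = [
    ("333-33-3333", "Simpson", "Alice", "333-3333", 1418),
    ("111-11-1111", "Sanders", "Ned", "444-4444", 1418),
    ("123-45-6789", "Moore", "Artie", "555-5555", 1503),
]

def entity_integrity_ok(primary_keys):
    """Entity integrity: every primary key is non-null and unique."""
    keys = list(primary_keys)
    return all(k is not None for k in keys) and len(keys) == len(set(keys))

def referential_integrity_ok(rows, parent):
    """Referential integrity: each non-null foreign key matches a primary key."""
    return all(fk is None or fk in parent for *_, fk in rows)
```

Running both checks against the sample data returns True; a student row pointing at a nonexistent advisor would make the referential check fail.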
78
Evaluating information system effectiveness and efficiency Why study effectiveness? Problems have arisen or criticisms have been voiced in connection with a system; indicators of ineffectiveness of the hardware and software being used may prompt the review; management may wish to implement a system initially developed in one division throughout the organization, but may want to first establish its effectiveness; a post-implementation review determines whether the new system is meeting its objectives
79
Indicators of System Ineffectiveness excessive down time and idle time slow system response time excessive maintenance costs inability to interface with new hardware/software unreliable system outputs data loss excessive run costs frequent need for program maintenance and modification user dissatisfaction with output format, content, or timeliness
80
System Quality! Measures of System Quality typically focus on performance characteristics of the system under study Quality: Ease of use, Interface consistency, Maintainability, Response Time, System Reliability
81
Two approaches to measuring system effectiveness Goal-centered view: does the system achieve the goals set out for it? Conflicts over priorities, timing, etc. can lead to objectives being met in the short run by sacrificing fundamental system qualities, leading to a long-run decline in the effectiveness of the system. System resource view: desirable qualities of a system are identified and their levels measured; if the qualities exist, then information system objectives, by inference, should be met. Measuring the qualities of the system may give a better, longer-term view of a system's effectiveness. The main problem: measuring system qualities is much more difficult than measuring goal achievement.
82
2 Types of Evaluations for System Effectiveness Relative evaluation: the auditor compares the state of goal accomplishment after the system is implemented with the state of goal accomplishment before it was implemented (improved task accomplishment and improved quality of working life). Absolute evaluation: the auditor assesses the extent of goal accomplishment after the system has been implemented (operational effectiveness, technical effectiveness, and economic effectiveness).
83
Task Accomplishment An effective IS improves the task accomplishment of its users. Providing specific measures of past accomplishment that the auditor can use to evaluate an IS is difficult, because performance measures for task accomplishment differ across applications and sometimes across organizations. For a manufacturing control system they might be: number of units output, number of defective units reworked, units scrapped, amount of down time/idle time. It is important to trace task accomplishment over time: a system may appear to have improved for a short time after implementation, but fall into disarray thereafter.
84
Quality of Working Life High quality of working life for users of a system is a major objective in the design process. Unfortunately, there is less agreement on the definition and measurement of the concept of quality of working life. Different groups have different vested interests - some productivity, some social
85
Operational Effectiveness Objectives Auditor examines how well a system meets its goals from the viewpoint of a user who interacts with the system on a regular basis. Four main measures: Frequency of use, Nature of use, Ease of use, and User satisfaction.
86
Frequency and Nature of Use Frequency - employed widely, but problematic sometimes a high quality system leads to low frequency of use because the system permits more work to be accomplished in a shorter period of time. sometimes a poor quality system leads to a low frequency of use since users dislike the system Nature - can use systems in many ways lowest level: treat as black box providing solutions highest level: use to redefine how tasks, jobs performed and viewed
87
Ease of Use and User Satisfaction Ease of use: there is a positive correlation between users' feelings about systems and the degree to which the systems are easy to use. In evaluating ease of use, it is important to identify the primary and secondary users of a system (terminal location, flexibility of reporting, ease of error correction). User satisfaction has become an important measure of operational effectiveness because of the difficulties and problems associated with measures of frequency of use, nature of use, and ease of use (problem finding, problem solving, input, processing, report form).
88
Technical Effectiveness Objectives - Has the appropriate hardware and software technology been used to support a system, or, whether a change in the support hardware or software technology would enable the system to meet its goals better. Hardware performance can be measured using hardware monitors or more gross measures such as system response time, down time. Software effectiveness can be measured by examining the history of program maintenance, modification and run time resource consumption. The history of program repair maintenance indicates the quality of logic existing in a program; i.e., extensive error correction implies: inappropriate design, coding or testing; failure to use structured approaches, etc. Major problem: hardware and software not independent
89
Economic Effectiveness Objectives - Requires the identification of costs and benefits and the proper evaluation of costs and benefits - a difficult task since costs and benefits depend on the nature of the IS. For example, some of the benefits expected and derived from an IS designed to support a social service environment would differ significantly from a system designed to support manufacturing activities. Some of the most significant costs and benefits may be intangible and difficult to identify, and next to impossible to value.
90
Comparison of Audit Approaches Effectiveness audit: express an opinion on whether a system achieves the goals set for it; these goals may be quite broad or specific (having the right system quality to ensure the right processes accomplish the right tasks to meet the right goals). Audits of system efficiency: whether maximum output is achieved at minimum cost, or with minimum input, assuming a given level of quality.