Chapter 7: Computer-Assisted Audit Techniques [CAATs]


1 Chapter 7: Computer-Assisted Audit Techniques [CAATs]
IT Auditing & Assurance, 2e, Hall & Singleton

2 CLASSES OF INPUT CONTROLS
Source document controls
Data coding controls
Batch controls
Validation controls
Input error correction
Generalized data input systems

3 SOURCE DOCUMENT CONTROLS
Controls in systems using physical source documents
Source document fraud
To control for this exposure, control procedures are needed over source documents to account for each one:
Use pre-numbered source documents
Use source documents in sequence
Periodically audit source documents

Source Document Controls – in systems that use physical source documents to initiate transactions, careful control must be exercised over these instruments. Source document fraud can be used to remove assets from the organization. To control against this type of exposure, implement control procedures over source documents to account for each document.
Use Pre-numbered Source Documents – source documents should come pre-numbered from the printer with a unique sequential number on each document. This provides an audit trail for tracing transactions through the accounting records.
Use Source Documents in Sequence – source documents should be distributed to users and used in sequence, which requires that adequate physical security be maintained over the source document inventory at the user site. Access to source documents should be limited to authorized persons.
Periodically Audit Source Documents – the auditor should compare the number of documents used to date with those remaining in inventory plus those voided due to errors.

4 DATA CODING CONTROLS Checks on data integrity during processing
Transcription errors
Addition errors: an extra digit is added
Truncation errors: a digit is removed
Substitution errors: a digit is replaced
Transposition errors
Single transposition: adjacent digits are transposed (reversed)
Multiple transposition: non-adjacent digits are transposed
Control = check digits, added to the code when it is created (suffix, prefix, or embedded)
Sum of digits: detects transcription errors only
Modulus 11, with a different weight per column: detects transposition and transcription errors
Introduces storage and processing inefficiencies

Data Coding Controls – coding controls are checks on the integrity of the data codes used in processing. Three types of errors can corrupt data codes and cause processing errors: transcription errors, single transposition errors, and multiple transposition errors. Transcription errors fall into three classes: addition errors occur when an extra digit or character is added to the code; truncation errors occur when a digit or character is removed from the end of a code; and substitution errors are the replacement of one digit in a code with another. There are two types of transposition errors: single transposition errors occur when two adjacent digits are reversed, and multiple transposition errors occur when nonadjacent digits are transposed. Check Digit – a control digit (or digits) added to the code when it is originally assigned that allows the integrity of the code to be established during subsequent processing. The digit can be located anywhere in the code: suffix, prefix, or embedded. A simple sum-of-digits check digit detects only transcription errors; the popular modulus 11 method, which weights each column differently and recalculates the check digit during processing, also detects transposition errors. The use of check digits introduces storage and processing inefficiencies and should be restricted to essential data.
MODULUS 11 example: Code = 5372. Weighted sum: 5(5) + 4(3) + 3(7) + 2(2) = 62. 62 / 11 = 5 with a remainder of 7. 11 – 7 = 4, the check digit. Revised code = 53724.
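The modulus 11 calculation above can be sketched in a few lines of Python. This is an illustrative sketch, not production code; it assumes the weighting scheme shown in the slide's example (weights running from code length + 1 down to 2, check digit appended as a suffix).

```python
def modulus_11_check_digit(code: str) -> str:
    """Append a modulus-11 check digit to a numeric code.

    Weights run from len(code)+1 down to 2, matching the slide's
    example: code 5372 -> weights 5,4,3,2 -> check digit 4.
    Note: a remainder of 1 would yield a two-character "digit" 10;
    real schemes disallow such codes or mark them with 'X'.
    """
    weights = range(len(code) + 1, 1, -1)          # e.g. 5,4,3,2 for a 4-digit code
    total = sum(w * int(d) for w, d in zip(weights, code))
    check = (11 - total % 11) % 11                 # remainder 0 -> check digit 0
    return code + str(check)

def is_valid(coded: str) -> bool:
    """Recompute the check digit during processing and compare."""
    return modulus_11_check_digit(coded[:-1]) == coded

# Worked example from the slide: 5*5 + 4*3 + 3*7 + 2*2 = 62; 62 mod 11 = 7; 11 - 7 = 4
assert modulus_11_check_digit("5372") == "53724"
assert is_valid("53724")
assert not is_valid("53274")   # single transposition error is detected
```

Because each column carries a different weight, swapping two digits changes the weighted sum, which is why modulus 11 catches transpositions that a plain sum of digits misses.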

5 BATCH CONTROLS
Method for handling high volumes of transaction data, especially in paper-fed information systems
Control of the batch continues through all phases of the system and all processes (i.e., not JUST an input control)
All records in the batch are processed together
No records are processed more than once
An audit trail is maintained from input to output
Requires grouping of similar input transactions

Batch Controls – an effective method of managing high volumes of transaction data through a system. Batch control reconciles the output produced by the system with the input originally entered into the system. Controlling the batch continues throughout all phases of the system. It assures that all records in the batch are processed, that no records are processed more than once, and that an audit trail of transactions is created from input through processing to output. It requires grouping similar types of input transactions together in batches and then controlling the batches throughout data processing.
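The reconciliation described above rests on a few control figures computed over the batch. A minimal Python sketch, assuming hypothetical record fields (`account`, `amount`); the three figures are the classic record count, dollar total, and hash total (a sum of a non-financial field that is meaningless except as a control):

```python
def batch_control_totals(batch):
    """Compute the three classic batch control figures."""
    return {
        "record_count": len(batch),                             # how many records
        "dollar_total": sum(rec["amount"] for rec in batch),    # financial total
        "hash_total": sum(rec["account"] for rec in batch),     # non-financial sum
    }

batch = [
    {"account": 10412, "amount": 250.00},
    {"account": 20397, "amount": 99.50},
    {"account": 30518, "amount": 410.25},
]

# Totals recorded when the batch is created are later reconciled
# against totals recomputed from the system's output.
totals = batch_control_totals(batch)
assert totals == {"record_count": 3, "dollar_total": 759.75, "hash_total": 61327}
```

A lost or duplicated record changes the record count and hash total even when the dollar total happens to balance, which is the point of carrying all three figures.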

6 VALIDATION CONTROLS
Intended to detect errors in data before processing
Most effective if performed close to the source of the transaction
Some require referencing a master file

Validation Controls – intended to detect errors in transaction data before the data are processed. They are most effective when performed as close to the source of the transaction as possible. Some validation procedures require making references against the current master file. There are three levels of input validation controls: field interrogation, record interrogation, and file interrogation.

7 VALIDATION CONTROLS
Field Interrogation
Missing data checks
Numeric-alphabetic data checks
Zero-value checks
Limit checks
Range checks
Validity checks
Check digit
Record Interrogation
Reasonableness checks
Sign checks
Sequence checks
File Interrogation
Internal label checks (tape)
Version checks
Expiration date check

1. Field Interrogation – programmed procedures that examine the characteristics of the data in a field.
Missing Data Checks – examine the contents of a field for the presence of blank spaces.
Numeric-Alphabetic Data Checks – determine whether the correct form of data is in a field.
Zero-Value Checks – verify that certain fields are filled with zeros.
Limit Checks – determine whether the value in the field exceeds an authorized limit.
Range Checks – assign upper and lower limits to acceptable data values.
Validity Checks – compare actual values in a field against known acceptable values.
Check Digit – identifies keystroke errors in key fields by testing the internal validity of the code.
2. Record Interrogation – procedures that validate the entire record by examining the interrelationship of its field values.
Reasonableness Checks – determine whether a value in one field, which has already passed a limit check and a range check, is reasonable when considered along with other data fields in the record.
Sign Checks – test to see whether the sign of a field is correct for the type of record being processed.
Sequence Checks – determine whether a record is out of order.
3. File Interrogation – ensures that the correct file is being processed by the system.
Internal Label Checks – verify that the file processed is the one the program is actually calling for. The system matches the file name and serial number in the header label with the program's file requirements.
Version Checks – verify that the version of the file being processed is correct. The version check compares the version number of the file with the program's requirements.
Expiration Date Check – prevents a file from being deleted before it expires.
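Several of the field- and record-level checks above can be sketched as one validation routine. This is a hedged illustration, not the book's code: the payroll-style record, its field names, and the specific limits (80 hours, a $7.25–$150 rate range) are all hypothetical.

```python
def validate_record(rec):
    """Apply field- and record-interrogation checks to one record;
    return a list of error descriptions (empty means the record passed)."""
    errors = []
    # Missing data check: key field must not be blank
    if not str(rec.get("employee_id", "")).strip():
        errors.append("missing employee_id")
    # Numeric-alphabetic check: hours must be numeric before further tests
    if not isinstance(rec.get("hours"), (int, float)):
        errors.append("hours not numeric")
        return errors
    # Limit check: hours may not exceed an authorized limit (assumed 80)
    if rec["hours"] > 80:
        errors.append("hours exceed limit")
    # Range check: pay rate must fall within assumed bounds
    if not (7.25 <= rec.get("rate", 0) <= 150.0):
        errors.append("rate out of range")
    # Sign check: hours must be non-negative for a regular record
    if rec["hours"] < 0:
        errors.append("negative hours")
    return errors

assert validate_record({"employee_id": "E01", "hours": 40, "rate": 25.0}) == []
assert "hours exceed limit" in validate_record({"employee_id": "E02", "hours": 99, "rate": 25.0})
```

Collecting errors into a list rather than stopping at the first failure mirrors how batch validation flags a record once for every test it fails.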

8 INPUT ERROR CORRECTION
Batch – correct and resubmit
Controls to make sure errors are dealt with completely and accurately
Immediate correction
Create an error file
Reverse the effects of partially processed transactions, resubmit corrected records
Reinsert corrected records at the processing stage where the error was detected
Reject the entire batch

Input Error Correction – when errors are detected in a batch, they must be corrected and the records resubmitted for reprocessing. This must be a controlled process to ensure that errors are dealt with completely and correctly. Three common error-handling techniques are:
1. Immediate Correction – when a keystroke error or an illogical relationship is detected, the system should halt the data entry procedure until the user corrects the error.
2. Create an Error File – individual errors should be flagged to prevent them from being processed. At the end of the validation procedure, the records flagged as errors are removed from the batch and placed in a temporary error holding file until the errors can be investigated. At each validation point, the system automatically adjusts the batch control totals to reflect the removal of the error records from the batch. Errors detected during processing require careful handling, because these records may already be partially processed. There are two methods for dealing with this complexity: the first is to reverse the effects of the partially processed transactions and resubmit the corrected records to the data input stage; the second is to reinsert corrected records at the processing stage in which the error was detected.
3. Reject the Entire Batch – some forms of errors are associated with the entire batch and are not clearly attributable to individual records. The most effective solution in this case is to cease processing and return the entire batch to data control to evaluate, correct, and resubmit. Batch errors are one reason for keeping the size of a batch to a manageable number.
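The error-file technique, including the automatic adjustment of batch control totals, can be sketched as follows. The `validate` callable and the `amount` field are stand-ins for whatever validation routine and record layout a real system would use.

```python
def split_batch(batch, validate):
    """Route failed records to an error holding file and adjust the
    batch control totals to reflect their removal.  `validate` returns
    a list of error descriptions for a record (empty = record is good)."""
    good, error_file = [], []
    for rec in batch:
        errs = validate(rec)
        if errs:
            error_file.append({"record": rec, "errors": errs})  # held for investigation
        else:
            good.append(rec)
    # Control totals now describe only the records that remain in the batch
    adjusted_totals = {
        "record_count": len(good),
        "dollar_total": sum(r["amount"] for r in good),
    }
    return good, error_file, adjusted_totals

batch = [{"amount": 100.0}, {"amount": -5.0}, {"amount": 42.0}]
good, errors, totals = split_batch(
    batch, lambda r: ["negative amount"] if r["amount"] < 0 else []
)
assert totals == {"record_count": 2, "dollar_total": 142.0}
assert len(errors) == 1
```

Corrected records from the error file would later re-enter either at data input or at the stage where the error was detected, per the two methods described above.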

9 GENERALIZED DATA INPUT SYSTEMS (GDIS)
Centralized procedures to manage data input for all transaction processing systems
Eliminates the need to create redundant routines for each new application
Advantages:
Improves control by having one common system perform all data validation
Ensures each AIS application applies a consistent standard of data validation
Improves systems development efficiency

Generalized Data Input Systems – to achieve a high degree of control and standardization over input validation procedures, some organizations employ a generalized data input system (GDIS), which includes centralized procedures to manage the data input for all of the organization's transaction processing systems. A GDIS eliminates the need to re-create redundant routines for each new application. It has three advantages: it improves control by having one common system perform all data validation; it ensures that each AIS application applies a consistent standard for data validation; and it improves systems development efficiency.

10 CLASSES OF PROCESSING CONTROLS
Run-to-Run Controls
Operator Intervention Controls
Audit Trail Controls

11 RUN-TO-RUN (BATCH)
Use batch figures to monitor the batch as it moves from one process to another
Recalculate control totals
Check transaction codes
Sequence checks

Run-to-Run Controls – use batch figures to monitor the batch as it moves from one programmed procedure (run) to another. They ensure that each run in the system processes the batch correctly and completely. Specific run-to-run control types are listed below:
Recalculate Control Totals – after each major operation in the process and after each run, dollar amount fields, hash totals, and record counts are accumulated and compared to the corresponding values stored in the control record.
Transaction Codes – the transaction code of each record in the batch is compared to the transaction code contained in the control record, ensuring that only the correct type of transaction is being processed.
Sequence Checks – the order of the transaction records in the batch is critical to correct and complete processing. The sequence check control compares the sequence of each record in the batch with the previous record to ensure that proper sorting took place.
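The three run-to-run checks can be combined into one verification pass between runs. A sketch under assumed field names (`key`, `amount`, `tx_code`) and an assumed control-record layout:

```python
def run_to_run_check(batch, control):
    """Verify a batch against its control record between runs:
    recalculated totals, transaction code, and sort sequence."""
    problems = []
    # Recalculate control totals
    if len(batch) != control["record_count"]:
        problems.append("record count mismatch")
    if sum(r["amount"] for r in batch) != control["dollar_total"]:
        problems.append("dollar total mismatch")
    # Transaction code check: every record must carry the expected code
    if any(r["tx_code"] != control["tx_code"] for r in batch):
        problems.append("wrong transaction code in batch")
    # Sequence check: records must still be in sorted order
    keys = [r["key"] for r in batch]
    if keys != sorted(keys):
        problems.append("batch out of sequence")
    return problems

control = {"record_count": 2, "dollar_total": 150.0, "tx_code": "SALE"}
batch = [{"key": 1, "amount": 100.0, "tx_code": "SALE"},
         {"key": 2, "amount": 50.0, "tx_code": "SALE"}]
assert run_to_run_check(batch, control) == []
```

Running this after every programmed procedure is what turns isolated batch totals into a run-to-run control: an error introduced in any run surfaces at the next checkpoint.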

12 OPERATOR INTERVENTION
When the operator manually enters controls into the system
The preference is for values to be derived by logic or provided by the system

Operator intervention increases the potential for human error. Systems that limit operator intervention through operator intervention controls are thus less prone to processing errors. Parameter values and program start points should, to the extent possible, be derived logically or provided to the system through look-up tables.

13 AUDIT TRAIL CONTROLS
Every transaction becomes traceable from input to output
Each processing step is documented
Preservation is key to the auditability of the AIS
Transaction logs
Log of automatic transactions
Listing of automatic transactions
Unique transaction identifiers (serial numbers)
Error listing

Audit Trail Controls – the preservation of an audit trail is an important objective of process control. Every transaction must be traceable through each stage of processing. Each major operation applied to a transaction should be thoroughly documented. The following are examples of techniques used to preserve audit trails:
Transaction Logs – every transaction successfully processed by the system should be recorded on a transaction log. There are two reasons for creating a transaction log: it is a permanent record of transactions, and not all of the records in the validated transaction file may be successfully processed (some of these records fail tests in subsequent processing stages). A transaction log should contain only successful transactions.
Log of Automatic Transactions – all internally generated transactions must be placed in a transaction log.
Listing of Automatic Transactions – the responsible end user should receive a detailed list of all internally generated transactions.
Unique Transaction Identifiers – each transaction processed by the system must be uniquely identified with a transaction number.
Error Listing – a listing of all error records should go to the appropriate user to support error correction and resubmission.
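A minimal sketch of a transaction log that ties several of these techniques together: unique serial identifiers, a flag distinguishing internally generated (automatic) transactions, and a listing of automatic transactions for the end user. Class and field names are illustrative, not from the text.

```python
import itertools

class TransactionLog:
    """Audit-trail sketch: every successfully processed transaction
    receives a unique serial identifier and a log entry."""

    def __init__(self):
        self.entries = []
        self._serial = itertools.count(1)   # source of unique transaction numbers

    def record(self, transaction, source="user"):
        entry = {
            "tx_id": next(self._serial),    # unique transaction identifier
            "source": source,               # "system" marks internally generated transactions
            "transaction": transaction,
        }
        self.entries.append(entry)
        return entry["tx_id"]

    def automatic_transactions(self):
        """Listing of internally generated transactions for the responsible end user."""
        return [e for e in self.entries if e["source"] == "system"]

log = TransactionLog()
log.record({"amount": 100.0})
log.record({"amount": 12.5}, source="system")
assert len(log.entries) == 2
assert len(log.automatic_transactions()) == 1
```

Only transactions that pass all processing stages would be recorded here, matching the rule that a transaction log contains successful transactions only.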

14 OUTPUT CONTROLS Ensure system output:
Not misplaced
Not misdirected
Not corrupted
Privacy policy not violated
Batch systems are more susceptible to exposure and require greater controls
Controlling batch systems output
Many steps from printer to end user
Data control clerk checkpoint
Unacceptable printing should be shredded
Cost/benefit basis for controls
Sensitivity of data drives the level of control

Output Controls – ensure that system output is not lost, misdirected, or corrupted and that privacy is not violated. The type of processing method in use influences the choice of controls employed to protect system output. Batch systems are more susceptible to exposure and require a greater degree of control than real-time systems.
Controlling Batch Systems Output – batch systems usually produce output in the form of hard copy, which typically requires the involvement of intermediaries. The output is removed from the printer by the computer operator, separated into sheets and separated from other reports, reviewed for correctness by the data control clerk, and then sent through interoffice mail to the end user. Each stage is a point of potential exposure where the output could be reviewed, stolen, copied, or misdirected. When processing or printing goes wrong and produces output that is unacceptable to the end user, the corrupted or partially damaged reports are often discarded in waste cans. Computer criminals have successfully used such waste to achieve their illicit objectives. Techniques for controlling each phase in the output process are employed on a cost-benefit basis that is determined by the sensitivity of the data in the reports.

15 OUTPUT CONTROLS Output spooling – risks:
Access the output file and change critical data values
Access the file and change the number of copies to be printed
Make a copy of the output file so illegal output can be generated
Destroy the output file before printing takes place

Output Spooling – applications are often designed to direct their output to a magnetic disk file rather than to the printer directly. The creation of an output file as an intermediate step in the printing process presents an added exposure. A computer criminal may use this opportunity to perform any of the following unauthorized acts: access the output file and change critical data values; access the file and change the number of copies to be printed; make a copy of the output file to produce illegal output reports; or destroy the output file before printing takes place.

16 OUTPUT CONTROLS Bursting Waste Data control Report distribution
Supervision
Waste: proper disposal of aborted copies and carbon copies
Data control: the data control group verifies and logs
Report distribution

Bursting – when output reports are removed from the printer, they go to the bursting stage to have their pages separated and collated. The clerk may make an unauthorized copy of the report, remove a page from the report, or read sensitive information. The primary control over bursting is supervision.
Waste – computer output waste represents a potential exposure. Aborted reports and the carbon copies from multipart paper removed during bursting should be disposed of properly.
Data Control – the data control group is responsible for verifying the accuracy of computer output before it is distributed to the user. The clerk reviews the batch control figures for balance, examines the report body for garbled, illegible, and missing data, and records the receipt of the report in data control's batch control log.
Report Distribution – the primary risks associated with report distribution include reports being lost, stolen, or misdirected in transit to the user. To minimize these risks: the name and address of the user should be printed on the report; an address file of authorized users should be consulted to identify each recipient; and adequate access control should be maintained over the files. The reports may be placed in a secure mailbox to which only the user has the key, the user may be required to appear in person at the distribution center and sign for the report, or a security officer or special courier may deliver the report to the user.

17 OUTPUT CONTROLS
End-user controls
Report retention: end-user detection
Statutory requirements (government)
Number of copies in existence
Existence of soft copies (backups)
Destroyed in a manner consistent with the sensitivity of its contents

End User Controls – output reports should be re-examined for any errors that may have evaded the data control clerk's review. Errors detected by the user should be reported to the appropriate computer services management. A report should be stored in a secure location until its retention period has expired. Factors influencing the length of time a hard copy report is retained include: statutory requirements specified by government agencies; the number of copies of the report in existence; and the existence of magnetic or optical images of reports that can act as permanent backup. Reports should be destroyed in a manner consistent with the sensitivity of their contents.

18 TESTING COMPUTER APPLICATION CONTROLS
Around the computer
Rarely appropriate
Through the computer
Supported by continuous audit techniques

Testing Computer Application Controls – control-testing techniques provide information about the accuracy and completeness of an application's processes. These tests follow two general approaches:
Black box: testing around the computer
White box: testing through the computer

19 TESTING COMPUTER APPLICATION AROUND THE COMPUTER
Ignore the internal logic of the application
Use functional characteristics
Flowcharts
Interview key personnel
Advantage: the application does not have to be removed from operations to test it
Appropriately applied to:
Simple applications
Relatively low levels of risk

Black Box (Around the Computer) Technique – auditors performing black box testing do not rely on a detailed knowledge of the application's internal logic. They seek to understand the functional characteristics of the application by analyzing flowcharts and interviewing knowledgeable personnel in the client's organization. The auditor tests the application by reconciling production input transactions processed by the application with output results. The advantage of the black box approach is that the application need not be removed from service and tested directly. This approach is feasible for testing applications that are relatively simple. Complex applications require a more focused testing approach to provide the auditor with evidence of application integrity.

20 TESTING COMPUTER APPLICATION CONTROLS THROUGH THE COMPUTER
Relies on an in-depth understanding of the internal logic of the application
Uses a small volume of carefully crafted, custom test transactions to verify specific aspects of logic and controls
Allows auditors to conduct precise tests with known outcomes, which can be compared objectively to actual results

White Box (Through the Computer) Technique – relies on an in-depth understanding of the internal logic of the application being tested. Several techniques for testing application logic directly are available. This approach uses small numbers of specially created test transactions to verify specific aspects of an application's logic and controls. Auditors are able to conduct precise tests, with known variables, and obtain results that they can compare against objectively calculated results.

21 COMPUTER AIDED AUDIT TOOLS AND TECHNIQUES (CAATTs)
Test data method
Base case system evaluation
Tracing
Integrated test facility (ITF)
Parallel simulation
Generalized audit software (GAS)

Computer-Aided Audit Tools and Techniques for Testing Controls – there are five CAATT approaches (the test data method, base case system evaluation, tracing, the integrated test facility, and parallel simulation), along with generalized audit software (GAS).

22 TEST DATA Used to establish the application processing integrity
Uses a "test deck":
Valid data
Purposefully selected invalid data
Every possible input error, logical process, and irregularity
Procedure: predetermine results and expectations, run the test deck, and compare

Test Data Method – used to establish application integrity by processing specially prepared sets of input data through production applications that are under review. The results of each test are compared to predetermined expectations to obtain an objective evaluation of application logic and control effectiveness.
Creating Test Data – when creating test data, auditors must prepare a complete set of both valid and invalid transactions. If test data are incomplete, auditors might fail to examine critical branches of application logic and error-checking routines. Test transactions should test every possible input error, logical process, and irregularity.
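The run-and-compare step of the test data method can be sketched as below. The "application" here is a hypothetical callable that rejects negative amounts; a real test deck would drive the production application itself.

```python
def run_test_deck(application, test_deck):
    """Process each test transaction and compare the actual output
    against the predetermined expected result; return discrepancies."""
    discrepancies = []
    for case in test_deck:
        actual = application(case["input"])
        if actual != case["expected"]:
            discrepancies.append({"input": case["input"],
                                  "expected": case["expected"],
                                  "actual": actual})
    return discrepancies

# Hypothetical application logic under review: reject negative amounts
def app(tx):
    return "REJECT" if tx["amount"] < 0 else "ACCEPT"

test_deck = [
    {"input": {"amount": 50.0}, "expected": "ACCEPT"},   # valid transaction
    {"input": {"amount": -1.0}, "expected": "REJECT"},   # purposefully invalid
]
assert run_test_deck(app, test_deck) == []   # no discrepancies: controls behaved as expected
```

An empty discrepancy list is the objective evidence the slide describes: every predetermined expectation matched the application's actual behavior.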

23 TRACING
A test data technique that takes a step-by-step walk through the application
The trace option must be enabled for the application
Specific data or types of transactions are created as test data
Test data are "traced" through all processing steps of the application, and a listing is produced of all lines of code as executed (variables, results, etc.)
An excellent means of debugging a faulty program

Tracing – performs an electronic walk-through of the application's internal logic. Implementing tracing requires a detailed understanding of the application's internal logic. Tracing involves three steps:
The application under review must undergo a special compilation to activate the trace option.
Specific transactions or types of transactions are created as test data.
The test data transactions are traced through all processing stages of the program, and a listing is produced of all programmed instructions that were executed during the test.

24 TEST DATA: ADVANTAGES AND DISADVANTAGES
Advantages of test data:
They employ a white box approach, thus providing explicit evidence
They can be employed with minimal disruption to operations
They require minimal computer expertise on the part of the auditors
Disadvantages of test data:
Auditors must rely on IS personnel to obtain a copy of the application for testing
Audit evidence is not entirely independent
They provide a static picture of application integrity
Relatively high cost to implement, leading to audit inefficiency

Advantages of Test Data Techniques: they employ through-the-computer testing, thus providing the auditor with explicit evidence concerning application functions; test data runs can be employed with only minimal disruption to the organization's operations; and they require only minimal computer expertise on the part of auditors.
Disadvantages of Test Data Techniques: auditors must rely on computer services personnel to obtain a copy of the application for test purposes, and audit evidence collected by independent means is more reliable than evidence supplied by the client; they provide a static picture of application integrity at a single point in time and no convenient means of gathering evidence about ongoing application functionality; and they have a relatively high cost of implementation, resulting in audit inefficiency.

25 Continuous Auditing
Embedded audit module
Real and test transactions
Tagged transactions
Audit hooks

26 INTEGRATED TEST FACILITY
An ITF is an automated technique that allows auditors to test logic and controls during normal operations
Set up a dummy entity within the application system
The system is able to discriminate between ITF audit module transactions and routine transactions
The auditor analyzes ITF results against expected results

Integrated Test Facility – an automated technique that enables the auditor to test an application's logic and controls during its normal operation. ITF databases contain "dummy" or test master file records integrated with legitimate records. ITF audit modules are designed to discriminate between ITF transactions and routine production data. The auditor analyzes ITF results against expected results.
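The discrimination between ITF and production transactions can be sketched as a routing step. The dummy-entity code and field names below are hypothetical; a real ITF module would be embedded in the production application itself.

```python
ITF_ENTITIES = {"DUMMY-CO-999"}   # hypothetical dummy-company code seeded in the master file

def route_transactions(transactions):
    """Separate ITF (dummy-entity) transactions from routine production
    data so audit tests never contaminate production totals."""
    production, itf_results = [], []
    for tx in transactions:
        if tx["entity"] in ITF_ENTITIES:
            itf_results.append(tx)    # collected for the auditor's analysis
        else:
            production.append(tx)     # normal processing continues
    return production, itf_results

txs = [{"entity": "ACME", "amount": 10.0},
       {"entity": "DUMMY-CO-999", "amount": 99.0}]
prod, itf = route_transactions(txs)
assert [t["entity"] for t in prod] == ["ACME"]
assert len(itf) == 1
```

Because the dummy entity lives alongside legitimate master file records, the auditor can feed test transactions during normal operations and then compare the collected ITF results against expected results.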

27 PARALLEL SIMULATION
The auditor writes or obtains a copy of a program that simulates the key features or processes to be reviewed/tested:
The auditor gains a thorough understanding of the application under review
The auditor identifies the processes and controls critical to the application
The auditor creates the simulation using a programming language or generalized audit software (GAS)
The auditor runs the simulated program using selected data and files
The auditor evaluates the results and reconciles differences
This is now considered an out-of-date approach.
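The reconciliation step of parallel simulation can be sketched as follows. The discount rule and field names are hypothetical, and the `simulate` callable stands in for an auditor-written routine that a real engagement might build with GAS.

```python
def parallel_simulation(production_records, production_output, simulate):
    """Re-run selected production data through the auditor's simulation
    of the key process and reconcile the two sets of output."""
    exceptions = []
    for rec, prod_result in zip(production_records, production_output):
        sim_result = simulate(rec)
        if sim_result != prod_result:
            exceptions.append({"record": rec,
                               "production": prod_result,
                               "simulated": sim_result})
    return exceptions

# Hypothetical key process: 2% discount on orders of $1,000 or more
def simulate(rec):
    rate = 0.98 if rec["amount"] >= 1000 else 1.0
    return round(rec["amount"] * rate, 2)

records = [{"amount": 500.0}, {"amount": 2000.0}]
output = [500.0, 1960.0]           # output previously produced by the application
assert parallel_simulation(records, output, simulate) == []
```

Any non-empty exception list would mark records where the production application and the auditor's simulation disagree, which is exactly what the auditor then investigates and reconciles.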

28 E-Mail and IM

29 Sedona Conference WG1: Best Practices for Electronic Document Retention and Production
The Sedona Conference exists to allow leading jurists, lawyers, experts, academics and others, at the cutting edge of issues in the area of antitrust law, complex litigation, and intellectual property rights, to come together - in conferences and mini-think tanks (Working Groups) - and engage in true dialogue, not debate, all in an effort to move the law forward in a reasoned and just way. WG1: The development of principles and best practices recommendations for electronic document retention and production.

30 Sedona ESI Framework
The Sedona Conference has published white papers on keyword searches and electronically stored information (ESI)
A keyword list can cut costs substantially
Most searches turn up a small percentage of relevant documents and miss many critical documents
There are risks for both under- and over-inclusive terms
The Sedona framework provides higher quality and lower costs

31 Keyword Search and E-Discovery
E-discovery and document review are expensive
Costs are associated with heavy reliance on human review
Search solutions were not built with e-discovery in mind
The majority of companies do not have an effective retention or archiving plan for electronic documents

32 ESI Retention Policy Must comply with SOX and be scrutinized by legal
Categorize documents by type and retention period
Use different archival methods
Software can provide for efficient retrieval
Train employees on the policy

There are several questions that need to be answered to address the larger question of "what to keep." The first is, "what type of documents and what sort of keywords or phrases are deemed sensitive?" The second is, "does the company allow documents to be created and saved on local machines, or is everything saved on a central server(s)?" Regarding the first question, there are some obvious answers. For example, words or phrases that have sexual or racial content would obviously be deemed sensitive; they might prove important in an employment-law-related case. To isolate e-mails containing such matter, an e-mail filtering program could be customized to search both messages and attachments and save copies of any that contained keywords or phrases deemed sensitive. This would safeguard the organization from relying on end users to save these messages, and would guarantee that all such e-mails are retained in a universal format in a single location. It would also save money and storage space by not archiving every message that passes through the company's servers. By indexing these messages and attachments, an organization will greatly streamline future data requests and save significant dollars in the process. An organization might also wish to copy and retain certain file types, depending on the nature of its business. For example, a high-tech manufacturer that creates potentially patentable designs might want to retain all Acrobat PDF files or other graphics-oriented documents that might contain design information, should a patent-infringement-oriented matter surface. The question of whether documents are to be created and saved on local machines or stored exclusively on a central network server inherently implies the backup and preservation procedures that a good retention policy should implement.
If files are created and saved on local machines, an organization can set up workstations so that duplicate files are centrally backed up or otherwise saved on central servers. This gives an organization much better control of potential evidence. Otherwise, records managers would need to periodically review the content of each machine, a time-consuming and expensive process. Another thing to consider in crafting a retention policy is whether or not employees are allowed to take notebook computers on the road or home, or to work on company business from a home computer. In the case of notebook systems, synchronization software can be used to update the files on central servers the next time the notebook systems log into the network, so all information is accounted for. Once the files are on the network, forensic search tools can be deployed to identify key files that would fall under the retention policy. They can then be copied and archived according to the procedures established in the policy. The tools one would use depend upon the operating environment and server access. Text Search Pro (published by New Technologies, Inc.) and DTSearch (by DTSearch Corp.) both work well, depending on the server configurations and types of data to be searched.
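The keyword-driven retention filter described above can be sketched in a few lines. The keyword list, function name, and message layout are all illustrative; a real filter would run inside the mail gateway or archiving product rather than as a standalone function.

```python
SENSITIVE_TERMS = {"confidential", "patent", "design"}   # illustrative keyword list

def should_retain(message_text: str, attachments=()) -> bool:
    """Retention-policy filter sketch: archive a message if it or any
    attachment contains a keyword or phrase deemed sensitive."""
    bodies = [message_text, *attachments]
    return any(term in body.lower()
               for body in bodies
               for term in SENSITIVE_TERMS)

assert should_retain("Draft of the new widget DESIGN attached")
assert should_retain("See file", attachments=["This memo is Confidential."])
assert not should_retain("Lunch at noon?")
```

Archiving only messages the filter flags, rather than every message that passes through the servers, is what yields the cost and storage savings the text describes; the trade-off is the under-inclusive-terms risk noted in the Sedona material.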

33 E-Mail Retention Policy
The Federal Rules of Civil Procedure, industry regulations, and internal policies all influence which e-mails should be archived.
Safe harbor in e-discovery rests in an organization adhering to its policies and procedures that guide the destruction of its data.
Not all e-mails are the same: set archive categories by the nature of the e-mail.
Adopt a policy and do not vary from it.
E-mail administration tasks gobble up 43% of IT support costs.

In one lawsuit an IT professional divulged that he was storing hundreds of backup tapes in a closet. He had not told his lawyers. Regardless of whether the backups had anything to do with the lawsuit, the opposing lawyers had the right to order that the backups be read. Of course they did that, and the cost ran into the millions of dollars for the company. In another case, Prudential Life Insurance was involved in a class action suit and the court had ordered that it destroy no records during the proceedings. Unfortunately no one told the IT department, which happily went on deleting electronic records on its own retention schedule. A judge issued a $1 million penalty against Prudential for destroying data that supported its opponent's case, and required it to deploy a records management program with a multimillion-dollar price tag. Although Prudential had not deliberately destroyed relevant data, it still lost huge sums of money over its inability to enforce a reasonable and consistent retention policy. The Federal Rules of Civil Procedure (FRCP), industry regulations, and internal policies all influence which e-mails should be archived. If your retention period for e-mail is 90 days, then you should adhere to that standard and not let 90 days mean 60 days or 6 months, as inconsistent adherence to retention policies is even worse in the eyes of the court than having a wrong policy in place. Safe harbor in e-discovery rests in an organization adhering to its policies and procedures that guide the destruction of its data.
RIMM: e-mails showed a systematic fraud perpetrated at the expense of the company and its shareholders.

34 Redacted E-mail and Privacy
Deleted information may be recoverable from electronic documents
Policy should be specific as to what information must be deleted before issuing a document to a third party
Covered by federal laws and regulations
Software is available to filter and delete
Privacy experts point out that similarly sensitive, private, and protected information is now in the hands of thousands of private corporations, which often use e-mail, the Web, and other means to exchange data with third parties. Even files that appear safe on the surface, such as those from which certain information has been redacted, can still put a company at risk of violating privacy laws. Workers often delete specific words from a document or block them out by changing font or background colors; in many cases, however, the word processor saves a copy of the original document, which can later be recovered by a third party. Louis Jurgens, executive vice president at security consultancy Sage Inc. in Amarillo, Texas, said there are several tamper-proof methods of redaction that work with electronic documents and e-mail and eliminate the possibility of information being recovered after redaction. But, he added, the technical aspects are less important than having a solid policy that complies with regulations; such a policy spells out exactly what items must be redacted from a document and who is responsible for scrubbing private data. Cheryl Camin, an attorney on the HIPAA practice team at the Dallas law firm of Gardere Wynne Sewell, said that at a minimum a policy should require employees to "de-identify" all patient records and other sensitive information by removing anything that could identify the person if there is a possibility of it being made public. Several vendors offer products for preventing privacy leaks, such as rule-based filtering of outgoing e-mail, a feature that SurfControl in Scotts Valley, Calif., and others offer to help close loopholes.
Other solutions include an automatic redaction product from Landsdowne, Penn.-based Appligent called Redax, which is used by the Department of Homeland Security, Kaiser Permanente and others to automatically identify and permanently remove sensitive information. Still, Jurgens said the bottom line is knowing what privacy laws require and
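Rule-based redaction of the kind that products such as Redax automate can be illustrated with a simple pattern-based scrubber. This is a sketch, not a complete HIPAA de-identifier: the two patterns and the `[REDACTED]` marker are illustrative assumptions. The essential point is that it rewrites the text itself, rather than hiding it visually with font or background tricks that leave the original data recoverable.

```python
import re

# Illustrative patterns only; real de-identification covers many more
# identifier types (names, dates, addresses, account numbers, ...).
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")          # e.g. 123-45-6789
MRN = re.compile(r"\bMRN:\s*\d+\b")                  # hypothetical medical record number

def redact(text, marker="[REDACTED]"):
    """Permanently replace matched identifiers in the text itself, so the
    sensitive values are gone from the document before it leaves the firm."""
    return MRN.sub(marker, SSN.sub(marker, text))
```

A policy would name exactly which patterns must be scrubbed and who is responsible for running the scrub before any document is issued to a third party.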

35 Cost of Poor Retention Policy
The judge could …
instruct the jury to infer that the destroyed record(s) contained information unfavorable to your company
order your company to pay the cost of restoring any archival media on which a lost record is stored, plus reasonable litigation expenses incurred by your opponent in filing a motion for discovery and production of the record
In Residential Funding Corp. v. DeGeorge Fin. Corp., 306 F.3d 99, the 2d U.S. Circuit Court sounded a grim warning for companies lacking a sound electronic document retention policy: if you wind up in court and cannot produce the goods, you may be liable. Applying long-standing spoliation doctrine to the electronic era, the Second Circuit held that where a party breaches a discovery obligation by failing to produce evidence, the trial court has broad discretion in fashioning an appropriate sanction, including the discretion to delay the start of a trial, to declare a mistrial, or to issue an adverse inference instruction. Sanctions may be imposed not only where a party has acted in bad faith or with gross negligence, but also where it has acted with ordinary negligence. Residential Funding holds that delay, as well as destruction, is sanctionable. Vacating the trial court's sanctions order, the 2d Circuit Court reversed and remanded the plaintiff's favorable $96.4 million jury verdict even though the unproduced evidence, which resided on old backup tapes, was felt to contain little if any significant material for the defense's case. Comparable holdings go back at least sixteen years, to the decision in National Assoc. of Radiation Survivors v. Turnage, 115 F.R.D. 543 (N.D. Cal. 1987).

36 Beware the Unmanaged IM and Email
Recipients may retain IM
IM is immune to firewalls
IM may be offensive to employees
Track IM usage
Enable content filtering and blocking
Log and audit conversations
Do not allow encrypted IM
Changes to the Federal Rules of Civil Procedure, effective December 1, 2006, created a new category of electronic records that may be requested during discovery in legal proceedings. Most countries around the world regulate the use of electronic messaging and electronic records retention in a fashion similar to the United States. The most common regulations related to IM at work involve the need to produce archived business communications to satisfy government or judicial requests under law; many instant messaging communications fall into the category of business communications that must be archived and retrievable. In addition to the malicious code threat, the use of instant messaging at work also creates a risk of non-compliance with laws and regulations governing the use of electronic communications in businesses. In the United States alone there are over 10,000 laws and regulations related to electronic messaging and records retention.[12] Many companies do not realize that retention and content policies should also apply to instant messaging, which is just turbo-charged e-mail; there is a huge misconception that IM is not a written business record and that you can say anything you want. "Users think that once you close your window, the message is gone, but that's not true. Even if you're not retaining the message, the person you're chatting with might be. Also, it's an enormous security issue if your employees are transmitting IMs on business issues. These messages are transmitted via the public Internet. They could include customers' social security numbers and important account information." Employers need to find out what the business presence of IM is in their workplace and how it is used. Enable content filtering and blocking.
Just as content filtering and blocking help prevent viruses, worms, and other malware from infecting the network via e-mail, employing these technologies for IM provides similar protection, Verhoeven says. Log and audit IM conversations; this includes searching logs based on keywords, dates, participants, protocols, or some combination of these factors. Such logging and auditing should be reviewable by an authorized reviewer, as well as by the IM user for any specific message, and there should be a defined retention period for storing this information. Do not permit the use of encryption in IM: if a user is encrypting IM messages, the monitoring system cannot determine whether the IM is legitimate, is sending out corporate secrets, or contains other unauthorized communication.
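The log-and-audit requirement, searching archived IM by keyword, date, or participant, can be sketched as a simple filter over the message archive. The record layout used here (sender, recipient, sent date, text) is an assumption for illustration; a real archiving product would also handle protocols and combined queries at scale.

```python
from datetime import date

def audit_im_log(messages, keyword=None, participant=None, on_date=None):
    """Filter archived IM records by any combination of keyword,
    participant, and date: the kind of query an authorized reviewer
    would run. Each message is a dict with keys
    'sender', 'recipient', 'sent' (a date), and 'text'."""
    results = []
    for m in messages:
        if keyword and keyword.lower() not in m["text"].lower():
            continue
        if participant and participant not in (m["sender"], m["recipient"]):
            continue
        if on_date and m["sent"] != on_date:
            continue
        results.append(m)
    return results
```

Note that this only works on plaintext records, which is one practical reason the policy above forbids encrypted IM: an encrypted archive cannot be audited this way.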

37 Chapter 7: Computer-Assisted Audit Techniques [CAATs]
IT Auditing & Assurance, 2e, Hall & Singleton

