Chap6. Organizing Files for Performance

Chapter Objectives (1)
- Look at several approaches to data compression
- Look at storage compaction as a simple way of reusing space in a file
- Develop a procedure for deleting fixed-length records that allows vacated file space to be reused dynamically
- Illustrate the use of linked lists and stacks to manage an avail list
- Consider several approaches to the problem of deleting variable-length records
- Introduce the concepts associated with the terms internal fragmentation and external fragmentation

Chapter Objectives (2)
- Outline some placement strategies associated with the reuse of space in a variable-length record file
- Provide an introduction to the idea underlying a binary search
- Undertake an examination of the limitations of binary searching
- Develop a keysort procedure for sorting larger files; investigate the costs associated with keysort
- Introduce the concept of a pinned record

Contents
6.1 Data compression
6.2 Reclaiming space in files
6.3 Finding things quickly: An introduction to internal sorting and binary searching
6.4 Keysorting

Data Compression (1)
Reasons for data compression:
- less storage
- faster transmission, decreasing access time
- faster sequential processing

Data Compression (2): Using a different notation
- Fixed-length fields are good candidates
- Decrease the number of bits by finding a more compact notation
  - e.g., the original state field notation is 16 bits (two ASCII characters), but because there are only 50 states, a 6-bit notation suffices
- Cons:
  - unreadable by humans
  - cost in encoding time
  - decoding modules increase the complexity of the software
  => typically used only for particular applications
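The notation change above can be sketched as a simple lookup table. This is a minimal illustration, not the book's code: only a handful of the 50 states are listed, and the numeric assignments are hypothetical.

```cpp
#include <cstdint>
#include <map>
#include <string>

// Hypothetical 6-bit state-code table: only a few of the 50 states
// are shown; a real table would enumerate all of them.
static const std::map<std::string, uint8_t> kStateCode = {
    {"AK", 0}, {"AL", 1}, {"AR", 2}, {"CA", 4}, {"OK", 35}, {"TX", 42}
};

// Encode a two-character state field (16 bits of ASCII) as a
// value that fits in 6 bits.
uint8_t encodeState(const std::string& s) { return kStateCode.at(s); }

// Decode by reverse lookup (linear scan is fine for 50 entries).
std::string decodeState(uint8_t code) {
    for (const auto& [name, c] : kStateCode)
        if (c == code) return name;
    return "??";
}
```

Note the trade-off the slide mentions: the stored value is no longer human-readable, and every read now pays a decoding step.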

Data Compression (3): Suppressing repeating sequences
Run-length encoding algorithm:
- read through the pixels, copying pixel values to the file in sequence, except where the same pixel value occurs more than once in succession
- when the same value occurs more than once in succession, substitute the following three bytes:
  - a special run-length code indicator (e.g., ff)
  - the pixel value that is repeated
  - the number of times that value is repeated
- e.g., 22 23 24 24 24 24 24 24 24 25 26 26 26 26 26 26 25 24 becomes 22 23 ff 24 07 25 ff 26 06 25 24

Data Compression (3): Suppressing repeating sequences
Run-length encoding (cont'd):
- an example of redundancy reduction
- Cons:
  - does not guarantee any particular amount of space savings
  - under some circumstances, the compressed image is larger than the original image
  - Why? Can you prevent this?
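The scheme described above can be sketched as follows. The indicator byte ff and the three-byte run format follow the slide; escaping of literal ff pixels is deliberately omitted, and the function name is illustrative.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Run-length encode a pixel stream using the slide's scheme:
// runs of length >= 2 become {0xff, value, count}; single pixels
// are copied through unchanged. (A real encoder would also have
// to escape literal 0xff pixels; that case is omitted here.)
std::vector<uint8_t> rleEncode(const std::vector<uint8_t>& pixels) {
    std::vector<uint8_t> out;
    size_t i = 0;
    while (i < pixels.size()) {
        size_t j = i;
        while (j + 1 < pixels.size() && pixels[j + 1] == pixels[i]) ++j;
        size_t run = j - i + 1;
        if (run >= 2) {
            out.push_back(0xff);                       // run indicator
            out.push_back(pixels[i]);                  // repeated value
            out.push_back(static_cast<uint8_t>(run));  // repeat count
        } else {
            out.push_back(pixels[i]);                  // literal pixel
        }
        i = j + 1;
    }
    return out;
}
```

On the slide's example stream this reproduces 22 23 ff 24 07 25 ff 26 06 25 24. It also shows why compression is not guaranteed: a run of length 2 still costs three bytes, so an image with no long runs can grow.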

Data Compression (4): Assigning variable-length codes
- Morse code: the oldest and most common variable-length coding scheme
- Some values occur more frequently than others; those values should take the least amount of space
- Huffman coding:
  - based on probability of occurrence
  - determine the probability of each value occurring
  - build a binary tree with a search path for each value
  - more frequently occurring values are given shorter search paths in the tree

Data Compression (5): Assigning variable-length codes
Huffman coding example:
  Letter: a     b     c     d      e      f      g
  Prob:   0.4   0.1   0.1   0.1    0.1    0.1    0.1
  Code:   1     010   011   0000   0001   0010   0011
e.g., the string "abde" encodes as 101000000001

Huffman Tree
[Figure: the Huffman tree for the codes above. From the root, the 1-branch leads directly to a (code 1); the 0-branch splits into the 01 subtree, holding b (010) and c (011), and the 00 subtree, holding d (0000), e (0001), f (0010), and g (0011).]
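The code table above can be applied directly. This sketch encodes a string with the slide's fixed codes; a full Huffman encoder would first build the tree from the probabilities, and the function name here is an assumption.

```cpp
#include <map>
#include <string>

// Encode a string with the variable-length code table from the slide
// (a=1, b=010, c=011, d=0000, e=0001, f=0010, g=0011). Because the
// codes are prefix-free, the concatenated bit string is decodable.
std::string huffmanEncode(const std::string& text) {
    static const std::map<char, std::string> kCode = {
        {'a', "1"},    {'b', "010"},  {'c', "011"}, {'d', "0000"},
        {'e', "0001"}, {'f', "0010"}, {'g', "0011"}
    };
    std::string bits;
    for (char ch : text) bits += kCode.at(ch);  // append each code word
    return bits;
}
```

Note how the most frequent letter (a, probability 0.4) gets the shortest code, so the expected code length is below the 3 bits a fixed-length code for 7 symbols would need.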

Data Compression (6): Irreversible compression techniques
- Some information can be sacrificed
- Less common in data files
- Shrinking a raster image from 400-by-400 pixels to 100-by-100 pixels: 1 pixel kept for every 16 pixels
- Speech compression: voice coding (the lost information is of little or no value)

Compression in UNIX
- System V pack & unpack:
  - use Huffman codes
  - after compressing a file, pack appends ".z" to the name of the packed file
- Berkeley UNIX compress & uncompress:
  - use the Lempel-Ziv method
  - after compressing a file, compress appends ".Z" to the name of the compressed file

Record Deletion and Storage Compaction
- record deletion: just mark each deleted record
- storage compaction: reclaim the space of all deleted records at once

Deleting Fixed-length Records for Reclaiming Space Dynamically (1)
- Reuse the space from deleted records as soon as possible
  - deleted records must be marked in a special way so we can find the deleted space
- To make record reuse quick, we need:
  - a way to know immediately whether there are empty slots in the file
  - a way to jump directly to one of those slots if it exists
  => linked lists or stacks for the avail list
  * avail list: a list made up of deleted records

Deleting Fixed-length Records for Reclaiming Space Dynamically (2)
[Figure: an avail list organized as a linked list, and as a stack]

Deleting Fixed-length Records for Reclaiming Space Dynamically (3)
Linking and stacking deleted records:
- arranging and rearranging links makes one available record slot point to the next
- the second field of a deleted record points to the next deleted record

Sample file showing a linked list of deleted records
1. Delete record 3: slot 3 holds *-1; list head (first available record) => 3
2. Delete record 5: slot 5 holds *3; list head => 5
3. Delete record 1: slot 1 holds *5; list head => 1
4. Insert three new records: the 1st goes into slot 1, the 2nd into slot 5, the 3rd into slot 3; list head => -1
(The surviving records Edwards..., Wills..., Maters..., and Chavez... stay in place throughout.)
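The deletions and reinsertions above behave exactly like a stack of RRNs. Below is an in-memory sketch, assuming illustrative struct and field names (not the book's code): remove pushes a freed slot onto the avail list, allocate pops one, and the link lives in the deleted slot itself.

```cpp
#include <vector>

// In-memory sketch of a fixed-length record file whose deleted slots
// form a stack (the avail list). Each deleted slot stores the RRN of
// the next free slot; -1 terminates the list. RRNs are 0-based here.
struct RecordFile {
    struct Slot { bool deleted = false; int nextFree = -1; };
    std::vector<Slot> slots;
    int head = -1;  // RRN of the first available slot, -1 if none

    void remove(int rrn) {            // push the freed slot
        slots[rrn].deleted = true;
        slots[rrn].nextFree = head;
        head = rrn;
    }
    int allocate() {                  // pop a slot, or append at the end
        if (head == -1) { slots.push_back({}); return (int)slots.size() - 1; }
        int rrn = head;
        head = slots[rrn].nextFree;
        slots[rrn].deleted = false;
        return rrn;
    }
};
```

Because the list is a stack, slots come back in the reverse of deletion order, matching the figure: delete 3, 5, 1, then the three inserts reuse slots 1, 5, 3.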

Deleting Variable-length Records
- Avail list of variable-length records:
  - each record carries a byte count at its beginning
  - links use byte offsets instead of RRNs
- Adding and removing records:
  - when adding a record, search through the avail list for a slot of the right size (i.e., big enough)

Removal of a record from an avail list with variable-length records
(a) Before removal: size 47 -> size 38 -> size 72 -> size 68 -> -1
(b) After removal of the size-72 record: size 47 -> size 38 -> size 68 -> -1 (a new link bypasses the removed record)

Storage Fragmentation
- Internal fragmentation (in fixed-length record files):
  - wasted space within a record
  - variable-length records minimize wasted space by doing away with internal fragmentation
- External fragmentation (in variable-length record files):
  - unused space outside or between individual records
  - three possible solutions:
    - storage compaction
    - coalescing the holes into a single, larger record slot
    - minimizing fragmentation by adopting a placement strategy

Internal Fragmentation in Fixed-length Records
64-byte fixed-length records; the unused space at the end of each record is internal fragmentation:
  Ames | John | 123 Maple | Stillwater | OK | 74075 | ...................................
  Morrison | Sebastian | 9035 South Hillcrest | Forest Village | OK | 74820 |
  Brown | Martha | 625 Kimbark | Des Moines | IA | 50311 | .........................

External Fragmentation in Variable-length Records
Each record is preceded by its record length:
  40 Ames | John | 123 Maple | Stillwater | OK | 74075 |
  64 Morrison | Sebastian | 9035 South Hillcrest | Forest Village | OK | 74820 |
  45 Brown | Martha | 625 Kimbark | Des Moines | IA | 50311 |
e.g., delete Record[2] (64 bytes) and insert a new 52-byte Record[i]:
  52 Adams | Kits | 3301 Washington D.C | Forest Village | IA | 43563 |
The 12 bytes of unused space left over are external fragmentation.

Placement Strategies
- First fit:
  - select the first available record slot that is big enough
  - suitable when lost space is due to internal fragmentation
- Best fit:
  - select the available record slot closest in size
  - keep the avail list in ascending order
- Worst fit:
  - select the largest record slot
  - keep the avail list in descending order
  - suitable when lost space is due to external fragmentation
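The three strategies can be sketched as scans over the avail-list slot sizes. The function names and the index-based interface are assumptions for illustration; each returns the index of the chosen slot, or -1 if no slot is large enough.

```cpp
#include <vector>

// First fit: take the first slot that is big enough.
int firstFit(const std::vector<int>& sizes, int need) {
    for (int i = 0; i < (int)sizes.size(); ++i)
        if (sizes[i] >= need) return i;
    return -1;
}

// Best fit: take the smallest slot that is still big enough.
int bestFit(const std::vector<int>& sizes, int need) {
    int pick = -1;
    for (int i = 0; i < (int)sizes.size(); ++i)
        if (sizes[i] >= need && (pick == -1 || sizes[i] < sizes[pick]))
            pick = i;
    return pick;
}

// Worst fit: take the largest slot, leaving the biggest usable remainder.
int worstFit(const std::vector<int>& sizes, int need) {
    int pick = -1;
    for (int i = 0; i < (int)sizes.size(); ++i)
        if (sizes[i] >= need && (pick == -1 || sizes[i] > sizes[pick]))
            pick = i;
    return pick;
}
```

In a real implementation, keeping the avail list sorted (ascending for best fit, descending for worst fit) lets each strategy stop at the first acceptable slot instead of scanning the whole list.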

Finding Things Quickly (1)
- Goal: minimize the number of disk accesses
- Finding things in simple field and record files may take many seeks
- Binary search algorithm for fixed-size records:

int BinarySearch(FixedRecordFile & file, RecType & obj, KeyType & key)
// binary search for key
{
    int low = 0;
    int high = file.NumRecs() - 1;
    while (low <= high) {
        int guess = (low + high) / 2;
        file.ReadByRRN(obj, guess);
        if (obj.Key() == key) return 1;       // record found
        if (obj.Key() < key) low = guess + 1; // search after guess
        else high = guess - 1;                // search before guess
    }
    return 0; // loop ended without finding key
}

Classes and Methods for Binary Search

class KeyType {
public:
    int operator==(KeyType &);
    int operator<(KeyType &);
};

class RecType {
public:
    KeyType Key();
};

class FixedRecordFile {
public:
    int NumRecs();
    int ReadByRRN(RecType & record, int RRN);
};

Finding Things Quickly (2)
Binary search vs. sequential search:
- binary search: O(log n), but the list must be sorted by key
- sequential search: O(n)

Finding Things Quickly (3)
- Sorting a disk file in RAM:
  - read the entire file from disk into memory
  - use an internal sort (a sort performed in memory); the UNIX sort utility uses an internal sort
- Limitations of binary search and internal sorting:
  - binary search requires more than one or two accesses (cf. a single access by RRN)
  - keeping a file sorted is very expensive
  - an internal sort works only on small files

Internal Sort
[Figure: the entire unsorted file is read from disk into memory, then sorted in memory]

Key Sorting & Its Limitations
- Keysort, also called "tag sort": only the keys are sorted
- Sorting procedure:
  1. read only the keys into memory
  2. sort the keys
  3. rearrange the records in the file according to the sorted keys
- Advantage: requires less RAM than an internal sort
- Disadvantages (= limitations):
  - the records on disk must be read twice
  - constructing the new (sorted) file requires a lot of seeking for records

[Figure: conceptual view of keysort. Before sorting, the KEYNODES array in RAM holds (KEY, RRN) pairs in file order: (HARRISON, 1), (KELLOG, 2), (HARRIS, 3), ..., (BELL, k); the records themselves (Harrison|Susan|387 Eastern..., Kellog|Bill|17 Maple..., Harris|Margaret|4343 West..., ..., Bell|Robert|8912 Hill...) stay on secondary storage. After sorting the keys in RAM, the array reads (BELL, k), (HARRIS, 3), (HARRISON, 1), (KELLOG, 2), ...; the records on disk are untouched.]

Pseudocode for keysort (1)

PROGRAM: keysort
    open input file as IN_FILE
    create output file as OUT_FILE
    read header record from IN_FILE and write a copy to OUT_FILE
    REC_COUNT := record count from header record
    /* read in records; set up KEYNODES array */
    for i := 1 to REC_COUNT
        read record from IN_FILE into BUFFER
        extract canonical key and place it in KEYNODES[i].KEY
        KEYNODES[i].RRN := i
(continued...)

Pseudocode for keysort (2)
    /* sort KEYNODES[].KEY, thereby ordering RRNs correspondingly */
    sort(KEYNODES, REC_COUNT)
    /* read in records according to sorted order, and write them out in this order */
    for i := 1 to REC_COUNT
        seek in IN_FILE to the record with RRN of KEYNODES[i].RRN
        read record from IN_FILE into BUFFER
        write BUFFER contents to OUT_FILE
    close IN_FILE and OUT_FILE
END PROGRAM
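The pseudocode above can be sketched in C++ with an in-memory stand-in for the file. The '|'-delimited record format and the function name are assumptions for illustration, not the book's code; the point is that only (key, RRN) pairs are sorted, and each record is then fetched once in key order, mirroring the seek-per-record pass.

```cpp
#include <algorithm>
#include <string>
#include <utility>
#include <vector>

// Keysort sketch: records is a stand-in for the input file, indexed
// by RRN. Only the (key, RRN) pairs are sorted; the records are then
// gathered in key order, one "seek" per record.
std::vector<std::string> keysort(const std::vector<std::string>& records) {
    std::vector<std::pair<std::string, int>> keynodes;
    for (int rrn = 0; rrn < (int)records.size(); ++rrn)
        keynodes.emplace_back(records[rrn].substr(0, records[rrn].find('|')),
                              rrn);                  // key = first field
    std::sort(keynodes.begin(), keynodes.end());     // sort keys, not records
    std::vector<std::string> out;
    for (const auto& [key, rrn] : keynodes)
        out.push_back(records[rrn]);                 // fetch in sorted order
    return out;
}
```

The memory cost is one (key, RRN) pair per record instead of one whole record, which is the advantage the slide claims; the gather loop is where the heavy seeking would occur on a real disk file.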

Two Solutions: why bother to write the file back?
- Write out the sorted KEYNODES[] array without writing the records back in sorted order
- The KEYNODES[] array is used as an index file

Relationship between the index file and the data file
[Figure: the index file holds the sorted (KEY, RRN) pairs — (BELL, k), (HARRIS, 3), (HARRISON, 1), (KELLOG, 2), ... — while the original data file keeps the records in their original order.]

Pinned Records (1)
- Pinned records: records whose physical location is referred to by other records
- We are not free to alter the physical location of such records, to avoid dangling references
- Pinned records make sorting more difficult, and sometimes impossible
- Solution: use an index file, while keeping the actual data file in its original order

Pinned Records (2)
[Figure: in a file with pinned records, deleting a pinned record leaves a dangling pointer in the record that referred to it]

Let's Review!!!
6.1 Data compression
6.2 Reclaiming space in files
6.3 Finding things quickly: An introduction to internal sorting and binary searching
6.4 Keysorting