1
Final Project of Information Retrieval and Extraction by d93921022 吳蕙如
2
Working Environment
OS : Linux 7.3
CPU : C800 MHz
Memory : 128 MB
Tools used :
–stopper
–stemmer
–trec_eval
–sqlite
Languages used :
–shell script : controls the inverted-file indexing procedure
–AWK : extracts the needed parts from the documents
–SQL : used while adopting the file-format database, sqlite
3
First Indexing Trial
1. FBIS Source Files
2. Documents Separation 18'51" + 55'13"
3. Documents Pass Stemmer 33'52" + 1:00'58"
4. Documents Pass Stopper 33'23" + 1:09'29"
5. Words Sort by AWK 44'07" + 1:19'09"
6. Term Frequency Count and Inverted File Indexing (one file per word) > 9 hours, never finished
When considering the indexing procedure, the most direct way is to do it step by step. So in this first trial I ran each step and saved its result as the input of the next step. However, as the directory grew, the time needed to write a file increased out of control. The cost of generating the index files looked unacceptable, and the run was stopped after 9 hours. A sketch of the per-word step follows.
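A minimal sketch of that one-file-per-word step, assuming the earlier steps emit sorted "term docno" lines (the file and directory names are hypothetical):

    # append each posting to its own per-term file -- one file per word
    awk '{ f = "index/" $1; print $2 >> f; close(f) }' sorted_terms.txt

Every posting reopens a file inside one huge directory, which matches the slowdown described above.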
4
Second Indexing Trial
1. FBIS Source Files
2. Documents Separation 23'29" + 58'36"
3. Documents Pass Stemmer 30'05" + 1:07'26"
4. Documents Pass Stopper 22'34" + 52'29"
5. Words Sort by AWK 22'44" + 48'27"
6. Words Count and Indexing
 1. Two-Character-Prefix Directory Separating 5"
 2. Word Files Indexing 12:41'00" + break
The index generation took too much time, and the number of files in a single directory seemed to be the cause. So I set up 26*26 subdirectories based on the first two characters of each word and spread the index files across them, as sketched below. It still took too long, and this trial was stopped after almost 13 hours, having finished only FBIS3.
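A sketch of that separation, with the same hypothetical input; bash brace expansion builds the aa..zz directories:

    # create the 26*26 two-character subdirectories
    for a in {a..z}; do for b in {a..z}; do mkdir -p "index/$a$b"; done; done
    # route each per-term file into its two-character subdirectory
    awk '{ f = "index/" substr($1,1,2) "/" $1; print $2 >> f; close(f) }' sorted_terms.txt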
5
Third Indexing Trial
1. FBIS Source Files
2. Documents Separation 20'15" + 1:09'38"
3. Documents Pass Stemmer 29'25" + 55'42"
4. Documents Pass Stopper and Sort 34'17" + 1:05'48"
5. Words Count and Indexing
 1. Prefix Directory Separating 6"
 2. Word Files Indexing (break after 11 hours)
Before finding a way to fix the indexing itself, I noticed that the earlier steps also cost a lot of time. I tried to combine them with pipeline commands, but this only worked with the system sort command: chaining the stopper into sort, as below, saved at least one hour. The total time was still far from acceptable.
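The one pipe that worked, with hypothetical file names and the course's stopper tool invoked as a filter:

    # stop and sort in one pass, saving a full intermediate write and read
    ./stopper < FBIS3-001.stemmed | sort > sorted/FBIS3-001.txt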
6
Fourth Indexing Trial
1. FBIS Source Files 33'51" + 1:00'38"
 1. Documents Separation
 2. Documents Pass Stemmer
 3. Documents Pass Stopper and Sort
2. Words Count and Indexing
 1. Prefix Directory Separating 2"
 2. Word Files Indexing 13:14'23" + 14:15'12"
I finally found that the time was mostly spent searching for the location of the next write, a space-allocation characteristic of Linux file systems. So I combined the former steps into one run from each source file to its sorted output, removing every intermediate file as soon as the next step had consumed it, as sketched below. The time dropped amazingly, to about one-third of the previous trial for these steps, and indexing finished for the first time, after 29 hours.
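A sketch of that per-file run with immediate cleanup (tool and path names hypothetical):

    # one run per source file, deleting each intermediate once it is consumed
    for f in FBIS3/*; do
        ./separate "$f" > tmp.sep
        ./stemmer < tmp.sep > tmp.stem && rm tmp.sep
        ./stopper < tmp.stem | sort > "sorted/$(basename "$f").txt" && rm tmp.stem
    done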
7
Fifth Indexing Trial
1. For Each FBIS Source File 1:10'26" + 1:19'29"
 1. Documents Separation
 2. Documents Pass Stemmer
 3. Documents Pass Stopper and Sort
 4. Words Count and Database Indexing
The indexing still took so long that I really wanted a way to cut the cost. A file-format database looked like a solution, so I adopted sqlite and wrote all my index lines as table rows in a single database file. The total time immediately dropped to two and a half hours. Amazing. A sketch of the loading step follows.
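A minimal sketch of such a load, assuming counted "term docno tf" lines; the table and column names are my own, and wrapping all inserts in one transaction is what keeps the bulk load fast:

    # create the index table once
    sqlite index.db 'CREATE TABLE inverted (term TEXT, doc TEXT, tf INTEGER);'
    # turn each counted line into an INSERT inside a single transaction
    ( echo 'BEGIN;'
      awk -v q="'" '{ print "INSERT INTO inverted VALUES(" q $1 q "," q $2 q "," $3 ");" }' counted.txt
      echo 'COMMIT;' ) | sqlite index.db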
8
Indexing - Level Analysis
1. For Each FBIS Source File
 1. Documents Separation
 2. Documents Pass Stemmer
 3. Documents Pass Stopper and Sort
 4. Words Count and Database Indexing
time : 1:08'53" + 1:16'39" (separate runs) vs. 2:22'57" (combined)
document count : 61578, then 130417 vs. 130417 (same)
file size : 262877184, then 542937088 vs. the same
Since the whole indexing could now be done in about 2.5 hours, I tried to measure the influence of the collection level: I indexed FBIS3 and FBIS4 separately, then combined them into one set and indexed again. The time costs were nearly the same, and the document counts and file sizes were equal. This is not at all surprising, because the working procedure adds no outside information.
9
Sixth Indexing Trial
1. For Each FBIS Source File
 1. Documents Separation
 2. Documents Pass Stemmer
 3. Documents Pass Stopper and Sort
 4. Words Count and Write into a Single Index File
time : 35'49" + 39'47" (variant 1) ; 33'04" + 35'43" (variant 2)
file size : 176340992 / 365469696
While revisiting the fourth and fifth trials, I figured the problem might be the sheer number of index files, so I tried writing all the index lines into one single file. Two variants were tried (see the sketch below):
–Write after counting the term frequency of each word.
–Append after computing all frequencies of a document.
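Since the per-file outputs are already sorted, the counting and the single-file write can be one pass; a sketch assuming one "term docno" pair per line:

    # identical lines are adjacent after the merge, so uniq -c yields the tf
    # of each (term, doc) pair; awk reorders the fields into "term doc tf"
    sort -m sorted/*.txt | uniq -c | awk '{ print $2, $3, $1 }' >> index.txt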
10
Seventh Indexing Trial
1. For Each FBIS Source File 44'38" + 50'32"
 1. Documents Separation
 2. Documents Pass Stemmer
 3. Documents Pass Stopper and Sort
 4. Words Count and Write into 26*26 Index Files
file number : 646 / 655
total file size : 178606080 / 367759360
When considering querying against the index, a single index file is just too large and would take a long time to search for the wanted terms. So I modified the final step to write the index lines into different files based on the two-character word prefix, as sketched below.
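A sketch of that split, reusing the hypothetical "term doc tf" lines; gawk keeps the output files open, so no per-line close is needed:

    # distribute index lines into one file per two-character prefix
    awk '{ print > ("index/" substr($1,1,2) ".idx") }' index.txt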
11
Indexing Time (? = step not timed before the run was stopped; >> = at least)
trial 1 : FBIS 3 : 18'51" + 33'52" + 33'23" + 44'07" + ? >> 2:10'13"
          FBIS 4 : 55'13" + 1:00'58" + 1:09'29" + 1:19'09" + ? >> 4:24'49"
          total : >> 6:35'02"
trial 2 : FBIS 3 : 23'29" + 30'05" + 22'34" + 22'44" + 5" + 12:41'00" = 14:19'57"
          FBIS 4 : 58'36" + 1:07'26" + 52'29" + 48'27" + ? >> 3:46'58"
          total : >> 18:06'55"
trial 3 : FBIS 3 : 20'15" + 29'25" + 34'17" + 6" + ? >> 1:24'03"
          FBIS 4 : 1:09'38" + 55'42" + 1:05'48" + ? >> 3:11'08"
          total : >> 4:35'11"
trial 4 : FBIS 3 : 33'51" + 13:14'23" = 13:48'14"
          FBIS 4 : 1:00'38" + 14:15'12" = 15:15'50"
          total : 29:04'04"
trial 5 : FBIS 3 : 1:10'26" ; FBIS 4 : 1:19'29" ; total : 2:29'55"
trial 6-1 : FBIS 3 : 35'49" ; FBIS 4 : 39'47" ; total : 1:15'36"
trial 6-2 : FBIS 3 : 33'04" ; FBIS 4 : 35'43" ; total : 1:08'47"
trial 7 : FBIS 3 : 44'38" ; FBIS 4 : 50'32" ; total : 1:35'10"
12
First Topic Query
1. Extract Topics from Source Files and Pass Stemmer and Stopper 1"
2. Select Per-Keyword Data from Index Database or Index File
3. Weight Computing
4. Ranking and Filtering
5. Evaluation
Five query topics, 15 keywords in total. Total time to query:
–Index database : 13'38" 31'27"
–Single index file : 9'00" 18'39"
–Separated index files : 2'04"
This seems not efficient enough; if several terms were examined together, more time should be saved. A sketch of the per-keyword lookup follows.
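The per-keyword lookup against the sqlite index might look like this ("oil" is a stand-in keyword; table and column names as in the loading sketch):

    # one SELECT per keyword, repeated for all 15 keywords of the five topics
    sqlite index.db "SELECT doc, tf FROM inverted WHERE term = 'oil';"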
13
Second Topic Query
1. Extract Topics from Source Files and Pass Stemmer and Stopper
2. Generate One Query String per Topic
3. Select Data from Index Database or Index File
4. Weight Computing
5. Ranking and Filtering
6. Evaluation
Total time to query:
–Index database : 2'30" 5'19"
–Single index file : 2'26" 4'55"
–Separated index files : not much progress expected, since each queried file still has to be checked separately. But as the number of query terms increases, the separated index files should save a lot more search time.
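Batching a topic's keywords into one statement, again with stand-in terms:

    # one SELECT per topic instead of one per keyword
    sqlite index.db "SELECT term, doc, tf FROM inverted WHERE term IN ('oil', 'price', 'export');"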
14
Updated Topic Query
1. Extract Topics from Source Files and Pass Stemmer and Stopper
2. Generate Query Strings Based on the Frequency of Each Term
3. Select Data from Index Database or Index File
4. Weight Computing
5. Ranking and Filtering
6. Evaluation
Some of the terms in the topics returned far too many documents and seemed not to work at all. I checked the document frequency of each term and removed the high-frequency (>10%) ones, as sketched below. This did not help; more related terms are needed for better precision.
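A sketch of the document-frequency filter; 13041 is 10% of the 130417 documents, and sqlite's default '|' output separator is assumed:

    # keep only terms that appear in at most 10% of the collection
    sqlite index.db "SELECT term, COUNT(*) FROM inverted GROUP BY term;" |
        awk -F'|' '$2 <= 13041 { print $1 }' > usable_terms.txt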
15
Frequency Term Query
1. Select Terms for Each Topic Based on Descriptions, Narratives and Web Queries
2. Order These Terms by the Document Frequency of Each Word
3. Decide the Number of Terms to Use and Generate Query Strings
4. The Following Steps Are the Same as Before
Term counts from five to 100 were tried. Precision increases only for the first few added terms, while query time rises proportionally with the number of query terms. High-frequency terms were removed, with thresholds of 10% and 20%; the stricter limit (10%) seems to help. A sketch of the ordering step follows.
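Ordering candidate terms by document frequency and keeping the rarest N could be sketched like this (candidates.txt and the cutoff of 20 are hypothetical):

    # look up each candidate's document frequency, then keep the N rarest
    while read t; do
        df=$(sqlite index.db "SELECT COUNT(*) FROM inverted WHERE term = '$t';")
        echo "$df $t"
    done < candidates.txt | sort -n | head -20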
16
Query : Topic
17
Query : Updated Topic
18
Query : Terms
19
Query Time (columns = number of query terms; "topic" = the plain topic query)
query terms   :  topic     5    10    15    20    30    40    60    80   100
db FBIS 3     :     30    44    71   100   126   180   234   347   462   582
db FBIS 3+4   :     63    94   147   202   258   372   484   721   953  1197
file FBIS 3   :     43    67    90   117   143   192   245   349   476   594
file FBIS 3+4 :     89   135   188   243   343   404   510   722   986  1232
20
Conclusion
As I examined the index file and the term frequencies I generated, I found many terms that seem useless. They may be meaningless, like "aaaf", or misspellings, like "internacion". Some terms have a frequency count of less than three; if these terms were removed, I suppose queries would run even faster (a sketch of such pruning follows). I could have spent more time sorting and indexing the inverted file, but when I tried part of this, the time cost made me wonder whether it was worthwhile; perhaps a recent-query cache is better than a full sorting pass. This is the end of my project report.
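Such pruning could be a single statement against the sqlite index, assuming the table from the loading sketch:

    # drop every term whose total collection frequency is below three
    sqlite index.db "DELETE FROM inverted WHERE term IN
        (SELECT term FROM inverted GROUP BY term HAVING SUM(tf) < 3);"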