Slide 1: Document Image Analysis, Lecture 12: Word Segmentation
UC Berkeley CS294-9, Fall 2000
Richard J. Fateman (University of California, Berkeley) and Henry S. Baird (Xerox Palo Alto Research Center)
Slide 2: The course, recently…
– We studied symbol recognition, classifiers, and their combinations
– Word recognition as distinct from character recognition
Slide 3: A good segmentation method (or several) is handy
– We cannot rely on a lexicon to contain all words (names, proper nouns, numbers, acronyms)
– Insisting that words appear in the lexicon does not make them correct: PowerPoint tries to refuse "misspell" as "mispell", since the latter is not in its dictionary!
– Good segmentation means that symbol-based recognition has a better chance of success
Slide 4: Segmentation: naïve or clever
– Numerous papers on the subject
– Some without strong models (e.g., cut at thin parts)
– Some with exhaustive search / template matching
– Some with learning / internal comparisons
Slide 5: Naïve connected component analysis can't come close…
– Characters like "ij:; Ξ â % are separated into multiple components
– Ligatures are not separated: ffl, Œ, Æ, œ, ffi
– Vertical cuts between touching characters will not ordinarily work for italics (e.g., ULTRA CONDENSED faces, Times Italic)
– (Other problems: X 2, …)
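The over- and under-segmentation the slide describes is easy to reproduce with plain 4-connected component labeling. Below is a minimal Python sketch (the tiny glyph bitmaps are toy examples, not real scans): the dot of an "i" comes out as its own component, while strokes that touch anywhere merge into a single component.

```python
from collections import deque

def connected_components(bitmap):
    """Label 4-connected components of black pixels (1s) in a binary bitmap.
    Returns (component_count, label_grid)."""
    rows, cols = len(bitmap), len(bitmap[0])
    labels = [[0] * cols for _ in range(rows)]
    current = 0
    for r in range(rows):
        for c in range(cols):
            if bitmap[r][c] == 1 and labels[r][c] == 0:
                current += 1                      # start a new component
                queue = deque([(r, c)])
                labels[r][c] = current
                while queue:                      # breadth-first flood fill
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and bitmap[ny][nx] == 1 and labels[ny][nx] == 0):
                            labels[ny][nx] = current
                            queue.append((ny, nx))
    return current, labels

# A dotted "i": the dot and the stem are separate components (over-segmentation).
i_glyph = [
    [1, 0],   # dot
    [0, 0],
    [1, 0],   # stem
    [1, 0],
]
# Two strokes joined at the bottom collapse into one component (under-segmentation).
touching = [
    [1, 0, 1],
    [1, 1, 1],
]
```

Running `connected_components` on the two bitmaps gives 2 components for the "i" and 1 for the touching pair, which is exactly why naïve labeling both splits characters and fails to split ligatures.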
Slide 6: Papers of interest on segmentation
– Tsujimoto and Asada
– Bayer and Kressel
– Tao Hong's (1995) PhD thesis on degraded text recognition
Slide 7: Segmentation + clustering (Tao Hong)
Slide 8: Can lead to decoding!
Slide 9: Sometimes the image itself holds a key to decoding…
Slide 10: Visual inter-word relations
Slide 11: An example text block showing visual inter-word relationships
Slide 12: Pattern matching can lead to identifying a segment
Slide 13: (figure only)
Slide 14: Where this fits…
Slide 15: Example
Slide 16: Tsujimoto & Asada: Overview
Slide 17: Resolve the touching characters
– New metric for finding plausible breaks
– Use knowledge about "the usual suspects": rn/m, k/lc, d/cl, … (limits the search substantially)
Slide 18: Metric, pre-processing
– ANDing adjacent columns to form the profile
– Removing the slant from italics
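A rough sketch of what this pre-processing might look like. The constant shear and the toy bitmaps are my assumptions, not the paper's exact formulation: AND-ing each column with its neighbour yields a profile whose zeros mark candidate breaks even where the plain projection profile never reaches zero, and a simple row shift undoes a known italic slant.

```python
def and_profile(bitmap):
    """For each adjacent column pair (c, c+1), count pixels black in BOTH
    columns.  Zeros flag candidate break points that a plain projection
    profile (per-column black count) can miss when strokes overlap."""
    rows, cols = len(bitmap), len(bitmap[0])
    return [sum(bitmap[r][c] & bitmap[r][c + 1] for r in range(rows))
            for c in range(cols - 1)]

def deslant(bitmap, shear):
    """Undo a right-leaning italic slant by shifting row r left by
    round(shear * (rows - 1 - r)) pixels.  Assumes a constant, known
    shear; estimating it from the image is a separate problem."""
    rows, cols = len(bitmap), len(bitmap[0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        shift = round(shear * (rows - 1 - r))
        for c in range(cols):
            if bitmap[r][c] and 0 <= c - shift < cols:
                out[r][c - shift] = 1
    return out
```

On a bitmap where two glyphs overlap horizontally but share no column pair, `and_profile` still reports a zero where a vertical cut should go, while the ordinary profile stays positive everywhere.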
Slide 19: Choosing break candidates
Slide 20: Decision tree for "The"
Slide 21: Tree search
– Depth-first, looking for a solution to the string matching, in sequence
– Some partitions are penalized (but not eliminated) if the segmentation point is uncertain
– Segments are matched against omnifont templates (the "multiple similarity method")
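The depth-first search over break candidates can be sketched as follows. The string-valued "image", the toy template costs, and the penalty values are illustrative stand-ins of mine; the real system matches bitmap segments against omnifont templates.

```python
def best_segmentation(image, start, end, breaks, match_cost):
    """Depth-first search: try each candidate break after `start`, score
    the left segment with the template matcher, recurse on the remainder,
    and keep the cheapest total.  Uncertain breaks carry a penalty
    instead of being discarded outright."""
    if start == end:
        return 0.0, []
    best_cost, best_segs = float("inf"), None
    for pos, penalty in breaks:          # (column, uncertainty penalty)
        if start < pos <= end:
            seg = image[start:pos]
            rest_cost, rest_segs = best_segmentation(
                image, pos, end, breaks, match_cost)
            if rest_segs is None:        # no way to finish from `pos`
                continue
            total = match_cost(seg) + penalty + rest_cost
            if total < best_cost:
                best_cost, best_segs = total, [seg] + rest_segs
    return best_cost, best_segs

# Toy "template library": cost of the best match for a segment.
# (An assumption for illustration; unknown segments are expensive.)
TEMPLATES = {"r": 1.0, "n": 1.0, "m": 1.5}
def match_cost(seg):
    return TEMPLATES.get(seg, 10.0)
```

For the two-column image `"rn"` with an uncertain interior break (penalty 0.3) and a free break at the end, the search prefers the split `["r", "n"]` (cost 2.3) over the unsplit `["rn"]` (cost 10.0).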
Slide 22: Reexamined explanations
– Confusion examples: mrn, qcj, klc, B13, HI-I, mmnun, ckdc, etc. (about 30 confusions)
– The word image "This" might be mistaken for a differently segmented "This" (figure)
Slide 23: Some tough calls…
Slide 24: Unbelievable accuracy…
Slide 25: A different, perhaps more general method (Bayer, Kressel)
Goal: find the column position(s) at which characters are touching
– Treat it as a systematic classification problem
– Learn from a database containing labelled merged characters
– Collect real-life data; get human-marked breakpoints [or the data could be synthetic, I suppose]
– Find an appropriate feature set
– Learn the features of touching characters
– Hypothesize column breaks
– Applications: postal addresses, other material too
Slide 26: Database of touching characters: 2158 patterns
Slide 27: Big idea
Rather than represent breaks as low points in the projection profile, represent them in the natural context of touching characters, by actual examples suitably normalized for size (15–30 pixels high). The cut locations are manually marked.
Slide 28: Local feature set describing cut locations / measures of similarity
– Number of black pixels in the column (= the projection profile!)
– Number of white pixels counting from the top/bottom
– Number of white-black transitions
– Number of identical black or white pixels next to this column (a derivative of the projection profile?)
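These local features are straightforward to compute per column. The sketch below is my guess at concrete definitions matching the list above; Bayer and Kressel's exact normalizations are not reproduced here.

```python
def column_features(bitmap, c):
    """Local features for candidate cut column c: black-pixel count,
    white run from the top, white run from the bottom, white-to-black
    transitions, and pixel agreement with the next column."""
    col = [row[c] for row in bitmap]
    black = sum(col)                                           # projection profile value
    white_top = next((i for i, p in enumerate(col) if p), len(col))
    white_bottom = next((i for i, p in enumerate(reversed(col)) if p), len(col))
    transitions = sum(1 for a, b in zip(col, col[1:]) if a == 0 and b == 1)
    if c + 1 < len(bitmap[0]):                                 # neighbouring-column agreement
        nxt = [row[c + 1] for row in bitmap]
        same_as_next = sum(1 for a, b in zip(col, nxt) if a == b)
    else:
        same_as_next = 0
    return {"black": black, "white_top": white_top,
            "white_bottom": white_bottom, "transitions": transitions,
            "same_as_next": same_as_next}
```

Feature vectors like this, gathered over a window of columns around each candidate cut, are what the classifier on the next slides is trained on.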
Slide 29: Global feature set describing cut locations / measures of similarity
– Width-to-height ratio of the full image (wider suggests touching characters)
– Width-to-height ratio of the image after cutting(s)
– Number of white-black transitions
– Number of identical black or white pixels next to this column (a derivative of the projection profile?)
Slide 30: Illustration of the strategy
Slide 31: How accurate, how fast? (cut location)
– Finding cuts: 7.8% error on the learning set, 7.2% (!) on the test set
– 22% of the no-cut regions had errors
– Best results used a 50-feature classifier over a 9-column window
– Cost for one image: cut analysis is comparable to one character analysis
– Validates statistics over heuristics