
1 Lucene Open Source Search Engine

2 Lucene - Overview
Complete search engine in a Java library
Stand-alone only, no server
– But can use SOLR
Handles indexing and query
Fully featured – but not 100% complete
Customizable – to an extent
Fully open source
Current version: 3.6.1

3 Lucene Implementations
LinkedIn – OS software on integer list compression
Eclipse IDE – For searching documentation
Jira
Twitter
Comcast – XfinityTV.com, some set top boxes
Care.com
MusicBrainz
Apple, Disney
BobDylan.com

4 Indexing Lucene

5 Lucene - Indexing
Directory = A reference to an index
– RAMDirectory, SimpleFSDirectory
IndexWriter = Writes to the index; options:
– Limited or unlimited field lengths
– Auto commit
– Analyzer (how to do text processing, more on this later)
– Deletion policy (only for deleting old temporary data)
Document – Holds fields to index
Field – A name/value pair + index/store flags

6 Lucene – Indexer Outline
SimpleFSDirectory fsDir = new SimpleFSDirectory(file);
IndexWriter iWriter = new IndexWriter(fsDir, ...);
Loop: fetch text for each document {
  Document doc = new Document();
  doc.add(new Field(...));  // for each field
  iWriter.addDocument(doc);
}
iWriter.commit();
iWriter.close();
fsDir.close();
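A minimal, runnable version of this outline against the Lucene 3.6 API used in this course (a sketch: the index path, field names, and sample data are invented for illustration):

import java.io.File;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.SimpleFSDirectory;
import org.apache.lucene.util.Version;

public class LuceneIndexSketch {
  public static void main(String[] args) throws Exception {
    SimpleFSDirectory fsDir = new SimpleFSDirectory(new File("index"));
    IndexWriterConfig conf = new IndexWriterConfig(Version.LUCENE_36,
        new StandardAnalyzer(Version.LUCENE_36));
    IndexWriter iWriter = new IndexWriter(fsDir, conf);
    String[][] docs = { { "1", "Star Wars" }, { "2", "Blazing Saddles" } };  // stand-in data
    for (String[] d : docs) {
      Document doc = new Document();
      doc.add(new Field("id", d[0], Field.Store.YES, Field.Index.NOT_ANALYZED));
      doc.add(new Field("title", d[1], Field.Store.YES, Field.Index.ANALYZED));
      iWriter.addDocument(doc);  // one Document per item, one Field per name/value pair
    }
    iWriter.commit();
    iWriter.close();
    fsDir.close();
  }
}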

7 Class Materials
SharePoint link – use "search\[flast]" username
– sharepoint.searchtechnologies.com
– Annual Kickoff > Shared Documents > FY2013 Presentations > Introduction to Lucene
– lucene-training-src-FY2013.zip

8 Lucene – Index – Exercise 1
Create a new Maven project
– mvn archetype:generate -DgroupId=com.searchtechnologies -DartifactId=lucene-training -DarchetypeArtifactId=maven-archetype-quickstart -DinteractiveMode=false
– Right click pom.xml, Maven.. Add Dependency
  lucene-core in the search box
  Choose 3.6.1
– Expand Maven Dependencies.. right click lucene-core.. Maven download sources
– Source code level = 1.6
Copy source file LuceneIndexExercise.java
– Into the com.searchtechnologies package
Copy the data directory to your project
Follow the instructions in the file
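If you prefer to edit the pom directly, the same dependency looks like this (the org.apache.lucene groupId is the standard one for lucene-core; the version is the one named in this exercise):

<dependency>
  <groupId>org.apache.lucene</groupId>
  <artifactId>lucene-core</artifactId>
  <version>3.6.1</version>
</dependency>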

9 Query Lucene

10 Lucene - Query
Directory = An index reference
IndexReader = Reads the index, typically associated with reading document fields
– readOnly flag
IndexSearcher = Searches the index
QueryParser – Parses a string to a Query
– QueryParser = standard Lucene parser
– Constructor: Version, default field, analyzer
Query – Query expression to execute
– Returned by qParser.parse(String)
– Search Tech's QPL can generate Query objects

11 Lucene – Query part 2
Executing a search
– TopDocs td = iSearcher.search(query, numDocs)
TopDocs – Holds statistics on the search plus the top N documents
– totalHits, scoreDocs[], maxScore
ScoreDoc – Information on a single document
– Doc ID and score
Use IndexReader to fetch any Document from a Doc ID
– (includes all fields for the document)

12 Lucene – Search Outline
SimpleFSDirectory fsDir = new SimpleFSDirectory(File f);
IndexReader iReader = new IndexReader(fsDir, ...);
IndexSearcher iSearcher = new IndexSearcher(iReader);
StandardAnalyzer sa = new StandardAnalyzer(...);
QueryParser qParser = new QueryParser(...);
Loop: fetch a query from the user {
  Query q = qParser.parse(queryString);
  TopDocs tds = iSearcher.search(q, 10);
  Loop: for every document in tds.scoreDocs {
    Document doc = iReader.document(tds.scoreDocs[i].doc);
    Print: tds.scoreDocs[i].score, doc.get("field");
  }
}
// Close the StandardAnalyzer, iSearcher, and iReader
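Filled in for Lucene 3.6, the outline looks roughly like this (a sketch: in 3.6 the reader comes from IndexReader.open() rather than a constructor, and the "title" field and query string are carried over from the indexing example):

import java.io.File;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.SimpleFSDirectory;
import org.apache.lucene.util.Version;

public class LuceneSearchSketch {
  public static void main(String[] args) throws Exception {
    SimpleFSDirectory fsDir = new SimpleFSDirectory(new File("index"));
    IndexReader iReader = IndexReader.open(fsDir);  // read-only by default in 3.x
    IndexSearcher iSearcher = new IndexSearcher(iReader);
    StandardAnalyzer sa = new StandardAnalyzer(Version.LUCENE_36);
    QueryParser qParser = new QueryParser(Version.LUCENE_36, "title", sa);
    Query q = qParser.parse("star wars");
    TopDocs tds = iSearcher.search(q, 10);
    for (ScoreDoc sd : tds.scoreDocs) {
      Document doc = iReader.document(sd.doc);  // fetch the stored fields by doc ID
      System.out.println(sd.score + "  " + doc.get("title"));
    }
    sa.close();
    iSearcher.close();
    iReader.close();
    fsDir.close();
  }
}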

13 Lucene – Query – Exercise 2
Open the source file LuceneQueryExercise.java
Follow the instructions in the file

14 Relevancy Tuning Lucene

15 Lucene Extras – Fun Things You Can Do
iWriter.updateDocument(Term, Document)
– Updates the document which contains the "Term"
– "Term" in this case is a field/value pair, such as "id" = "3768169"
doc.setBoost(boost)
– Multiplies term weights in the doc by the boost value
– Part of "fieldNorm" when you do an "explain"
field.setBoost(boost)
– Multiplies term weights in the field by the boost value
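For example, a boosted update might look like this (a sketch: iWriter is the IndexWriter from the indexing outline, the title value is invented, and the id comes from this slide; Term is org.apache.lucene.index.Term):

Document doc = new Document();
doc.add(new Field("id", "3768169", Field.Store.YES, Field.Index.NOT_ANALYZED));
Field title = new Field("title", "Some Title", Field.Store.YES, Field.Index.ANALYZED);
title.setBoost(1.5f);  // boost just this field's term weights
doc.add(title);
doc.setBoost(2.0f);    // boost the whole document (folded into fieldNorm)
iWriter.updateDocument(new Term("id", "3768169"), doc);  // delete-then-add, keyed by the id term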

16 Explain - Example
iSearcher.explain(query, doc-number)
Query: star OR catch^0.6 for document 903
1.2778056 = (MATCH) product of:
  2.5556111 = (MATCH) sum of:
    2.5556111 = (MATCH) weight(title:catch^0.6 in 903), product of:
      0.56637216 = queryWeight(title:catch^0.6), product of:
        0.6 = boost
        7.2195954 = idf(docFreq=1, maxDocs=1005)
        0.13074881 = queryNorm
      4.512247 = (MATCH) fieldWeight(title:catch in 903), product of:
        1.0 = tf(termFreq(title:catch)=1)
        7.2195954 = idf(docFreq=1, maxDocs=1005)
        0.625 = fieldNorm(field=title, doc=903)
  0.5 = coord(1/2)

17 Lucene – Query – Exercise 3
Add explain to your query program
– Explanation exp = iSearcher.explain(...)
– Call it for all documents produced by your search
– Simply use toString() on the result of explain() to display the results
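In the loop from the search outline, that is roughly (a sketch; q, tds, and iSearcher follow the earlier variable names):

for (int i = 0; i < tds.scoreDocs.length; i++) {
  Explanation exp = iSearcher.explain(q, tds.scoreDocs[i].doc);  // org.apache.lucene.search.Explanation
  System.out.println(exp.toString());
}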

18 Boosting – Other Issues
Similarity class Javadoc documentation
– Very useful discussion of boosting formulas
Similarity.encodeNormValue()
– 8-bit floating point!

Boost => encoded byte (hex):
0.00 => 0    1.10 => 7C   2.20 => 80   3.30 => 82   4.40 => 84
0.10 => 6E   1.20 => 7C   2.30 => 80   3.40 => 82   4.50 => 84
0.20 => 72   1.30 => 7D   2.40 => 80   3.50 => 82   4.60 => 84
0.30 => 74   1.40 => 7D   2.50 => 80   3.60 => 83   4.70 => 84
0.40 => 76   1.50 => 7E   2.60 => 81   3.70 => 83   4.80 => 84
0.50 => 78   1.60 => 7E   2.70 => 81   3.80 => 83   4.90 => 84
0.60 => 78   1.70 => 7E   2.80 => 81   3.90 => 83   5.00 => 84
0.70 => 79   1.80 => 7F   2.90 => 81   4.00 => 83
0.80 => 7A   1.90 => 7F   3.00 => 81   4.10 => 84
0.90 => 7B   2.00 => 80   3.10 => 82   4.20 => 84
1.00 => 7C   2.10 => 80   3.20 => 82   4.30 => 84

Note the quantization: many nearby boost values encode to the same byte.

19 Lucene Query Objects
Query objects are used to execute the search:
Query String → Query Parser → Query objects → iSearcher.search() → Top Docs
All query objects derive from the Lucene Query class

20 Lucene Query Objects - Example
(george AND washington) OR (thomas AND jefferson)
BooleanQuery (clauses = SHOULD)
  BooleanQuery (clauses = MUST)
    TermQuery george
    TermQuery washington
  BooleanQuery (clauses = MUST)
    TermQuery thomas
    TermQuery jefferson
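Built in code, the tree above looks like this (a sketch; the field name "body" is an assumption):

// BooleanQuery, TermQuery, BooleanClause from org.apache.lucene.search; Term from org.apache.lucene.index
BooleanQuery gw = new BooleanQuery();
gw.add(new TermQuery(new Term("body", "george")), BooleanClause.Occur.MUST);
gw.add(new TermQuery(new Term("body", "washington")), BooleanClause.Occur.MUST);

BooleanQuery tj = new BooleanQuery();
tj.add(new TermQuery(new Term("body", "thomas")), BooleanClause.Occur.MUST);
tj.add(new TermQuery(new Term("body", "jefferson")), BooleanClause.Occur.MUST);

// (george AND washington) OR (thomas AND jefferson)
BooleanQuery top = new BooleanQuery();
top.add(gw, BooleanClause.Occur.SHOULD);
top.add(tj, BooleanClause.Occur.SHOULD);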

21 Lucene BooleanQuery
Example queries:
  george +washington -martha
  jefferson -thomas +sally
WORKS LIKE AND:
  BooleanQuery bq = new BooleanQuery();
  bq.add(X, Occur.MUST);
  bq.add(Y, Occur.MUST);
WORKS LIKE OR:
  BooleanQuery bq = new BooleanQuery();
  bq.add(X, Occur.SHOULD);
  bq.add(Y, Occur.SHOULD);
WORKS LIKE: X AND (X OR Y)
  BooleanQuery bq = new BooleanQuery();
  bq.add(X, Occur.MUST);
  bq.add(Y, Occur.SHOULD);

22 Lucene – Query – Exercise 4
Create BooleanQuery and TermQuery objects as necessary to create a query without the query parser
Goal: (star AND wars) OR (blazing AND saddles)
TermQuery:
  tq = new TermQuery(new Term("field", "token"))
BooleanQuery:
  BooleanQuery bq = new BooleanQuery();
  bq.add(query, Occur.MUST);
Occur – Occur.MUST, Occur.SHOULD, Occur.MUST_NOT
TermQuery and BooleanQuery derive from Query
– Any "Query" object can be passed to iSearcher.search()

23 Lucene Proximity Queries
"Spanning" queries → return matching "spans"
DOCUMENT: Four score and seven years ago, our forefathers brought forth...
(word positions 0 through 10 mark word boundaries)
Query                                        Returns
four before/5 seven                          0:4
(four before/5 seven) before forefathers     0:8
brought near/3 ago                           5:9
(four adj score) or (brought adj forth)      0:2, 8:10

24 Proximity Queries: Available Operators
(standard) SpanTermQuery – For terms inside spanning queries
(standard) SpanNearQuery – inOrder flag handles both near and before
(standard) SpanOrQuery
(standard) SpanMultiTermQueryWrapper – fka SpanRegexQuery
(Search Tech) SpanAndQuery
(Search Tech) SpanBetweenQuery – between(start, end, positive-content, not-content)
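For example, "four before/5 seven" from the previous slide could be built with the standard span classes like this (a sketch; the field name "body" is an assumption):

// classes from org.apache.lucene.search.spans; Term from org.apache.lucene.index
SpanQuery four = new SpanTermQuery(new Term("body", "four"));
SpanQuery seven = new SpanTermQuery(new Term("body", "seven"));
SpanNearQuery before5 = new SpanNearQuery(
    new SpanQuery[] { four, seven },  // clauses
    5,                                // slop: at most 5 positions apart
    true);                            // inOrder = true is what turns near into before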

25 Span Queries
Demo of LuceneSpanDemo.java

26 Analysis Lucene

27 Analyzers
"Analysis" = "Text Processing" in Lucene
Includes:
– Tokenization
  Since 1955, the B-52... → Since, 1955, the, B, 52
– Token filtering: splitting, joining, replacing, filtering, etc.
  Since, 1955, the, B, 52 → 1955, B, 52
  George, Lincoln → george, lincoln
  MasteringBiology → Mastering, Biology
  B-52 → B52, B-52, B, 52
– Stemming
  tables → table
  carried → carry

28 Analyzer
Analyzer, Tokenizer, TokenFilter
Tokenizer: text → TokenStream
TokenFilter: TokenStream → TokenStream
Analyzer: a complete text processing function (one tokenizer + multiple token filters)
– Manufactures TokenStreams
– Pipeline: string → Tokenizer → TokenFilter → ...

29 Existing Analyzers, Tokenizers, Filters
Tokenizer
– (Standard) CharTokenizer, WhitespaceTokenizer, KeywordTokenizer, ChineseTokenizer, CJKTokenizer, StandardTokenizer, WikipediaTokenizer (more)
– (Search Tech) UscodeTokenizer (produces each HTML tag as a separate token)
TokenFilter
– Stemmers: (Standard) many language-specific stemmers, PorterStemFilter, SnowballFilter
– Stemmers: (Search Tech) Lemmatizer

30 Existing Analyzers, Tokenizers, Filters
TokenFilters (continued)
– LengthFilter, LowerCaseFilter, StopFilter, SynonymTokenFilter (don't use), WordDelimiterFilter (SOLR only)
Analyzers
– WhitespaceAnalyzer, StandardAnalyzer, various language analyzers, PatternAnalyzer
Analyzers almost always need to be customized.
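Because customization is the norm, a custom Analyzer is usually just a tokenizer plus a hand-picked filter chain. A minimal sketch against the Lucene 3.6 API (this particular chain is only an example):

import java.io.Reader;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.LowerCaseFilter;
import org.apache.lucene.analysis.StopAnalyzer;
import org.apache.lucene.analysis.StopFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardTokenizer;
import org.apache.lucene.util.Version;

public class MyAnalyzer extends Analyzer {
  @Override
  public TokenStream tokenStream(String fieldName, Reader reader) {
    TokenStream ts = new StandardTokenizer(Version.LUCENE_36, reader);  // one tokenizer
    ts = new LowerCaseFilter(Version.LUCENE_36, ts);                    // + token filters
    ts = new StopFilter(Version.LUCENE_36, ts, StopAnalyzer.ENGLISH_STOP_WORDS_SET);
    return ts;
  }
}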

31 Creating and Using TokenStream
TokenStream tokenStream = new SomeTokenizer(...);
tokenStream = new SomeTokenFilter1(tokenStream);
tokenStream = new SomeTokenFilter2(tokenStream);
CharTermAttribute charTermAtt = tokenStream.getAttribute(CharTermAttribute.class);
OffsetAttribute offsetAtt = tokenStream.getAttribute(OffsetAttribute.class);
while (tokenStream.incrementToken()) {
  charTermAtt → now contains info on the token's term
  offsetAtt.startOffset() → now contains the token's start offset
}
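Made concrete with stock classes, the same pattern looks like this (a sketch, assuming Lucene 3.6; the input string echoes the earlier tokenization example):

import java.io.StringReader;
import org.apache.lucene.analysis.LowerCaseFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.analysis.tokenattributes.OffsetAttribute;
import org.apache.lucene.util.Version;

public class TokenStreamSketch {
  public static void main(String[] args) throws Exception {
    TokenStream ts = new StandardTokenizer(Version.LUCENE_36,
        new StringReader("Since 1955, the B-52..."));
    ts = new LowerCaseFilter(Version.LUCENE_36, ts);
    CharTermAttribute termAtt = ts.getAttribute(CharTermAttribute.class);
    OffsetAttribute offsetAtt = ts.getAttribute(OffsetAttribute.class);
    ts.reset();
    while (ts.incrementToken()) {  // pull tokens until the stream is exhausted
      System.out.println(termAtt.toString()
          + " [" + offsetAtt.startOffset() + ":" + offsetAtt.endOffset() + "]");
    }
    ts.end();
    ts.close();
  }
}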

32 Token Streams - How They Work
1. The consumer calls incrementToken() on the last TokenFilter
2. Each TokenFilter calls incrementToken() on the next stage down, ending at the Tokenizer
3. The Tokenizer gets the next token from the Reader, stores it in the Attribute objects, and returns
4. Each TokenFilter modifies the attribute objects and returns
5. The consumer uses the Attribute objects

33 Creating and Using TokenStream DEMO

34 Replacement Pattern
Token filters simply modify attributes that pass through:
incrementToken() calls incrementToken() on its input, modifies the attribute objects, and returns.

35 Token Filter – Replacement Pattern
public final class LowerCaseFilter extends TokenFilter {
  private CharTermAttribute termAtt;

  public LowerCaseFilter(TokenStream input) {
    super(input);
    termAtt = (CharTermAttribute) addAttribute(CharTermAttribute.class);
  }

  public final boolean incrementToken() throws IOException {
    if (input.incrementToken()) {
      final char[] buffer = termAtt.buffer();
      final int length = termAtt.length();
      for (int i = 0; i < length; i++)
        buffer[i] = Character.toLowerCase(buffer[i]);  // modify the term in place
      return true;
    } else
      return false;
  }
}

36 Deletion Pattern
Token filters check token attributes and may call incrementToken() on their input multiple times:
keep looping until a good token is found, then return it.

37 Token Filter – Deletion Pattern
public final class TokenLengthLessThan50CharsFilter extends TokenFilter {
  private CharTermAttribute termAtt;
  private PositionIncrementAttribute posIncrAtt;

  public TokenLengthLessThan50CharsFilter(TokenStream in) {
    super(in);
    termAtt = (CharTermAttribute) addAttribute(CharTermAttribute.class);
    posIncrAtt = (PositionIncrementAttribute) addAttribute(PositionIncrementAttribute.class);
  }

  public final boolean incrementToken() throws IOException {
    int skippedPositions = 0;
    while (input.incrementToken()) {
      final int length = termAtt.length();
      if (length > 50) {  // skip this token, but remember its position increment
        skippedPositions += posIncrAtt.getPositionIncrement();
        continue;
      }
      posIncrAtt.setPositionIncrement(posIncrAtt.getPositionIncrement() + skippedPositions);
      return true;
    }
    return false;
  }
}

38 Splitting Tokens Pattern – First Call
When splitting a token, save the splits aside for later:
on the first incrementToken() call, split the token, return the first half, and save the second half.

39 Splitting Tokens Pattern – Second Call
When called the second time, just return the saved token.

40 Token Filter – Splitting Pattern
public final class SplitDashFilter extends TokenFilter {
  private CharTermAttribute termAtt;
  char[] saveToken = new char[100];  // Buffer to hold tokens from previous incrementToken() call
  int saveLen = 0;

  public SplitDashFilter(TokenStream in) {
    super(in);
    termAtt = (CharTermAttribute) addAttribute(CharTermAttribute.class);
  }

  public final boolean incrementToken() throws IOException {
    if (saveLen > 0) {  // Output previously saved token
      termAtt.setEmpty();
      termAtt.append(new String(saveToken, 0, saveLen));
      saveLen = 0;
      return true;
    }
    if (input.incrementToken()) {  // Get a new token to split
      final char[] buffer = termAtt.buffer();
      final int length = termAtt.length();
      boolean foundDash = false;
      for (int i = 0; i < length; i++) {  // Scan token looking for '-' to split it
        if (buffer[i] == '-') {
          foundDash = true;
          termAtt.setLength(i);  // Set length so termAtt = first half now
        } else if (foundDash)
          saveToken[saveLen++] = buffer[i];  // Save second half for later
      }
      return true;  // Output first half right away
    } else
      return false;
  }
}

41 Token Splitting DEMO

42 Stemmers and Lemmatizers
Stemmers available in Lucene
– Snowball, Porter
– They are both terrible [much too aggressive]
– For example: mining → min
Kstem
– Publicly available stemmer with a Lucene TokenFilter implementation
– Better, but still too aggressive: searchmining → searchmine
Search Technologies Lemmatizer
– Based on the GCIDE dictionary
– Extremely accurate, only reduces words to dictionary entries
– Also does irregular spelling reduction: mice → mouse
– STILL A WORK IN PROGRESS: needs one more refactor

43 ST Query Processing Lucene

44 Search Technologies Query Parser
Originally written for GPO
– Query → FAST FQL
Converted to .NET for CPA
Refactored for Lucene for Aspermont
Refactored to be more componentized and pipeline-oriented for OLRC
Still a work in progress
– Lacks documentation, wiki, etc.

45 Search Technologies Query Processing
Query Parser – Parses the user's entered query
Query Processing Pipeline – A sequence of query processing components which can be mixed and matched
Lucene Query Builder
Other query builders possible
– FAST, Google, etc.
– No others implemented yet
Query Configuration File – Holds query parsing and processing parameters

46 Our Own Query Processing: Why?
Gives us more control
– Can exactly meet the user's query syntax
Exposes operators not available through Lucene syntax
– Example: the before proximity operator
"Behind the scenes" query tweaking
– Field weighting
– Token merging: rio tinto → url:riotinto
– Exact case and exact suffix matching
– True lemmatization (not just stemming)

47 ST Query Parser – Overall Structure
Query String → Parser → Processor → ... → Processor → Lucene Builder → iSearcher → Top Docs
The parser and processors operate on generic "AQNode" structures; the Lucene builder turns them into Lucene Query structures.

48 The Search Technologies Query Structure
The Query object holds references to all query representations:
– userQuery – the query string
– nodeQuery – the generic AQNode structures
– finalQuery – the Lucene Query structures
Therefore, query processors can process any query representation
Everything is a QueryProcessor – parsing, processing, and query building

49 Query Parser: Features
AND, OR, NOT, parentheses
– ((star and wars) or (star and trek))
– star and not born {broken}
+, -
– + = query boost
– - = not {broken}
Proximity operators
– within/3, near/3, adj
Phrases
field: searches
– title:(star and wars) and description:(the original)

50 Using Query Processors
Load the query configuration:
  QueryConfig qConfig = new QueryConfig("data/query-config.xml");
Create a query processor:
  IQueryProcessor iqp2 = new TokenizationQueryProcessor();
Initialize the query processor:
  iqp2.initialize(qConfig);
Use query processors (simply call in sequence):
  iqp1.process(query);
  iqp2.process(query);
  iqp3.process(query);

51 Query Processors: Other Notes
Types of processors (off the shelf):
– Lemmatizer, tokenization, lower case
QueryParser and Query classes may need to be fully qualified:
  com.searchtechnologies.queryprocessor.Query query =
      new com.searchtechnologies.queryprocessor.Query(queryString);
The query parser only splits on whitespace:
  star-trek or star-wars → or(star-trek,star-wars)
Use TokenizationQueryProcessor to split fully:
  → or(phrase(star,trek),phrase(star,wars))

52 ST Query Processor – Exercise 5
Add the ST QueryProcessor to your Lucene query program
Add the dependency to your pom.xml:
– com.searchtechnologies : st-queryparser : 0.3
Add processors:
– com.searchtechnologies.queryprocessor.QueryParser
– TokenizationQueryProcessor()
– LowercaseQueryProcessor()
– LuceneQueryBuilder()
Initialize config, construct processors, initialize processors, execute processors – see the sketch below
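Putting the pieces from the last two slides together, the exercise skeleton might look like this (a sketch built only from calls shown on these slides; the real constructor signatures of this in-house library may differ):

QueryConfig qConfig = new QueryConfig("data/query-config.xml");

IQueryProcessor[] pipeline = {
    new com.searchtechnologies.queryprocessor.QueryParser(),  // fully qualified to avoid Lucene's QueryParser
    new TokenizationQueryProcessor(),
    new LowercaseQueryProcessor(),
    new LuceneQueryBuilder()
};
for (IQueryProcessor p : pipeline) p.initialize(qConfig);

com.searchtechnologies.queryprocessor.Query query =
    new com.searchtechnologies.queryprocessor.Query("star-trek or star-wars");
for (IQueryProcessor p : pipeline) p.process(query);

// query.getFinalQuery() should now hold the Lucene Query to pass to iSearcher.search()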

53 Creating Your Own Query Processor
AQNode – "Aspire Query Node"
– Operands – list of operands (references to other AQNodes)
– Operator – enumerated list (AND, OR, NEAR, ...)
– Proximity window (int)
– From value, to value (objects)
  Use the from value for token strings
  Use from + to values for date ranges, int ranges, etc.
– startChar, endChar (in the original user's query string)
– Enclosing field name
– Other stuff for future expansion
  Attached data objects
  Custom query builder

54 Query Processor Outline
public class MyQueryProcessor implements IQueryProcessor {
  @Override
  public void initialize(QueryConfig config) throws QueryProcessorException {
    // Read any parameters you need from the config
    // config is an AXML (a wrapper around a W3C DOM object)
  }

  @Override
  public void process(Query query) throws QueryProcessorException {
    // Process the query
    // query.getNodeQuery()  -> the AQNode version of the query
    // query.getUserQuery()  -> the original query string
    // query.getFinalQuery() -> the final (typically Lucene) query structure
  }
}

55 Query Processor Example
public class LowercaseQueryProcessor implements IQueryProcessor {
  @Override
  public void initialize(QueryConfig config) throws QueryProcessorException {
  }

  @Override
  public void process(Query query) throws QueryProcessorException {
    convertToLowerCaseAQNodes(query.getNodeQuery());
  }

  void convertToLowerCaseAQNodes(AQNode aqn) {
    if (aqn.getOperator() == AQNode.OperatorEnum.TERM) {
      String termText = (String) aqn.getFromValue();
      aqn.setFromValue(termText.toLowerCase());
      return;
    }
    if (aqn.getOperands() == null) return;
    for (AQNode childAqn : aqn.getOperands()) {  // recurse into the operand tree
      convertToLowerCaseAQNodes(childAqn);
    }
  }
}

56 ST Query Processor – Exercise 6
Copy the FixStarQueryProcessor
– Looks for "sta" and changes it to "star"
Fill out the contents of the QueryProcessor
Add the QueryProcessor to your query program
Run the program and query on "sta"
– Add to STQueryProcessorExcercise5.java

57 Query Processing New Features
Template substitution (OLRC)
– field:() searches are substituted for arbitrary query expressions
Lemmatization (OLRC, BNA)
Wildcard handling (OLRC)
Refactor of the Aspermont query processors
– Semantic network expansion (ontology)
– Add boost/reduce tokens (field:HI, field:LO)
– Proximity boost
– Composite fields and query field boost

58 Custom Hit Collector
collect() method is called for each matching doc
– Should be fast
– Throw an exception to break out of the loop
– Related to the Scorer, which supplies the score for the current doc
DecadesCollector
– Custom collector that takes the top-scoring document from each decade
– One main collector that wraps one TopScoreDocCollector per decade
– See source: DecadesCollector.java
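A minimal custom collector against the Lucene 3.6 Collector contract (a sketch, not the DecadesCollector itself: it just counts hits and remembers the best-scoring global doc ID):

import java.io.IOException;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.search.Collector;
import org.apache.lucene.search.Scorer;

public class BestDocCollector extends Collector {
  private Scorer scorer;
  private int docBase;
  public int hits = 0;
  public float bestScore = Float.NEGATIVE_INFINITY;
  public int bestDoc = -1;

  @Override
  public void setScorer(Scorer scorer) { this.scorer = scorer; }

  @Override
  public void collect(int doc) throws IOException {  // called once per match; keep it fast
    hits++;
    float score = scorer.score();
    if (score > bestScore) {
      bestScore = score;
      bestDoc = docBase + doc;  // doc is segment-relative; docBase makes it global
    }
  }

  @Override
  public void setNextReader(IndexReader reader, int docBase) { this.docBase = docBase; }

  @Override
  public boolean acceptsDocsOutOfOrder() { return true; }
}

Run it with iSearcher.search(q, new BestDocCollector()); the IndexSearcher.search(Query, Collector) overload is what drives collect().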

59 Complete Open Source Search Engine

