1
Citation Recommendation
Web Technology Laboratory, Ferdowsi University of Mashhad
2
Introduction
Current Approaches
Evaluation Methods
References
3
When starting work on a new research topic or brainstorming for novel ideas, a researcher has to be well aware of the most recent advances in that topic. Searching for related work is an important part of writing papers. Substantial effort is wasted rediscovering existing ideas.
4
While writing a paper, an author often wants to add a citation at a particular place but is not sure which papers to cite. The number of research papers published is growing exponentially, so this filtering process is generally tedious and time consuming.
5
Two common ways to find reference papers are:
1. Search for documents with search engines such as Google.
2. Trace cited references, starting from a small number of initial papers (seed papers).
6
We would like a recommendation system that can recommend citations for papers. The user has already written a few pages about the topic and can submit this document to the search system as the query. The user wants documents that the query document might cite.
7
Recommender systems emerged as an independent research area in the mid-1990s. Examples of such applications include recommending books, CDs, and other products at Amazon.com, movies at MovieLens, and so on.
8
The Collaborative Filtering (CF) Approach
Content-based Recommendation
Hybrid Approach
9
Works that can only recommend papers for the manuscript as a whole
Works that can recommend papers for a specific position (citation context) in the manuscript
10
11
Collaborative filtering: map the citation graph onto a collaborative-filtering ratings matrix, where each citing paper acts as a "user" and each cited paper as an "item".
Co-citation matching
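Below is a minimal Python sketch (not part of the original slides) of this mapping: a toy `citations` dict stands in for the citation graph, each citing paper is treated as a "user" that implicitly rates the papers it cites, and co-citation counts are computed from the same graph.

```python
from collections import defaultdict

# Hypothetical toy citation graph: citing paper -> papers it cites.
citations = {
    "p1": {"a", "b", "c"},
    "p2": {"a", "b"},
    "p3": {"b", "d"},
}

# CF view: each citing paper "rates" the papers it cites (1 = cited, 0 = not cited).
papers = sorted({c for cited in citations.values() for c in cited})
ratings = {u: {p: 1 if p in cited else 0 for p in papers} for u, cited in citations.items()}

# Co-citation matching: two papers are related if many papers cite them together.
co_cited = defaultdict(int)
for cited in citations.values():
    cited = sorted(cited)
    for i in range(len(cited)):
        for j in range(i + 1, len(cited)):
            co_cited[(cited[i], cited[j])] += 1

print(co_cited[("a", "b")])  # 2: "a" and "b" are co-cited by p1 and p2
```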
12
Content-based recommendation: recommend items based on the content of the items a user has experienced before.
Text-based analysis: these approaches use NLP and text-mining methods to find papers that are semantically similar to the input paper (see the sketch below).
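A minimal sketch of such text-based analysis, assuming scikit-learn is available; the toy corpus and query strings are made up for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical corpus of candidate papers (title + abstract text).
corpus = [
    "collaborative filtering for citation recommendation",
    "topic models for scientific literature search",
    "neural machine translation with attention",
]
query = "recommending citations with collaborative filtering"

vectorizer = TfidfVectorizer(stop_words="english")
doc_vectors = vectorizer.fit_transform(corpus)   # one TF-IDF vector per paper
query_vector = vectorizer.transform([query])     # project the query into the same space

# Rank candidate papers by cosine similarity to the query document.
scores = cosine_similarity(query_vector, doc_vectors).ravel()
ranked = sorted(zip(scores, corpus), reverse=True)
print(ranked[0])  # most similar paper and its score
```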
13
1. Build the candidate set (see the sketch below):
▪ The system retrieves the top 100 papers most similar to the query document and adds them to R (the base set).
▪ All papers cited by any paper in R are also added to R.
2. Rank the candidate set.
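A sketch of the candidate-set construction described above; `top_k_similar` and `cited_by` are hypothetical helpers (a retrieval function and a citation lookup) that the slides do not specify.

```python
def build_candidate_set(query_doc, corpus, top_k_similar, cited_by, k=100):
    """Candidate set as described above: take the top-k most similar papers
    (the base set R), then add every paper cited by any paper in R.
    `top_k_similar` and `cited_by` are assumed helpers, not part of the slides."""
    base_set = top_k_similar(query_doc, corpus, k)   # step 1: top-100 similar papers
    candidates = set(base_set)
    for paper in base_set:                           # step 2: expand with their references
        candidates.update(cited_by(paper))
    return candidates
```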
14
Rank candidates using a weighted sum of feature scores (a sketch follows below).
Features:
▪ Term similarity (TF-IDF)
▪ Citation count
▪ Author h-index
▪ Venue citation count
▪ Cited using similar terms
▪ Similar topics
Learn the feature weights from training data.
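A sketch of the weighted-sum ranking; the feature values and weights below are illustrative only (the actual weights would be learned from training data, as the slide notes).

```python
# Example feature scores for one candidate paper (all values are illustrative).
features = {
    "tfidf_similarity": 0.62,
    "citation_count": 0.40,        # normalized to [0, 1]
    "author_h_index": 0.55,
    "venue_citation_count": 0.30,
    "cited_with_similar_terms": 0.70,
    "topic_similarity": 0.58,
}

# Weights would be learned from training data; these values are made up for the sketch.
weights = {
    "tfidf_similarity": 0.35,
    "citation_count": 0.10,
    "author_h_index": 0.05,
    "venue_citation_count": 0.05,
    "cited_with_similar_terms": 0.25,
    "topic_similarity": 0.20,
}

score = sum(weights[f] * features[f] for f in features)
print(f"candidate score = {score:.3f}")
```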
15
16
1. Build the candidate set
▪ D = document corpus; each document d in D is represented by its global context (title + abstract) plus a set of in-link contexts (the contexts in which other papers cite d).
▪ Candidates = the top 100 documents retrieved for each out-link context c* of the query, plus the top 1000 documents retrieved using the query's global context (title + abstract).
2. Rank the candidate set
17
Input: a query manuscript without citation placeholders
Output: where citations are needed, and a list of candidate articles to be cited
Finding citation contexts:
▪ Divide the query manuscript into sentences / overlapping windows of 100 words (see the windowing sketch below)
▪ Extract the citation contexts of the corpus
▪ Language model
▪ n-gram
▪ Contextual similarity
▪ Topical relevance
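A sketch of the windowing step; the 50-word step size (50% overlap) is an assumption, since the slide only mentions overlapping 100-word windows.

```python
def candidate_contexts(manuscript, window=100, step=50):
    """Split the query manuscript into overlapping windows of `window` words.
    The 50-word step (50% overlap) is an assumption; the slide only specifies
    overlapping 100-word windows."""
    words = manuscript.split()
    contexts = []
    for start in range(0, max(len(words) - window + 1, 1), step):
        contexts.append(" ".join(words[start:start + window]))
    return contexts

# Each window is then scored (language model, n-grams, contextual similarity,
# topical relevance) to decide whether a citation is needed there.
```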
18
Multi-class SVM classifier (a sketch follows below)
Training and test data
Training:
▪ Feature set: local context, global context, similarity features
▪ Input: citing paper
▪ Output: label of the cited paper
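A minimal sketch using scikit-learn's LinearSVC; the three-dimensional feature vectors and the paper labels are made-up placeholders for the local-context, global-context, and similarity features.

```python
import numpy as np
from sklearn.svm import LinearSVC

# Hypothetical training data: each row is a feature vector built from the
# local context, global context, and similarity features of one citation;
# each label is the ID of the cited paper (making this a multi-class problem).
X_train = np.array([
    [0.8, 0.6, 0.7],
    [0.2, 0.1, 0.3],
    [0.9, 0.5, 0.8],
    [0.1, 0.2, 0.2],
])
y_train = ["paper_A", "paper_B", "paper_A", "paper_B"]

clf = LinearSVC()            # linear SVM; multi-class handled one-vs-rest
clf.fit(X_train, y_train)

X_test = np.array([[0.7, 0.6, 0.6]])
print(clf.predict(X_test))   # predicted label of the cited paper
```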
19
Composed of two independent modules:
Content-based filtering (CBF)
Collaborative filtering (CF)
20
The CBF module uses the text of the active paper as input, and the CF module uses the citations from the active paper as input.
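A sketch of one possible way to combine the two modules' outputs; the linear blend and the weight alpha are assumptions, since the slides only state that the modules run independently.

```python
def combine_scores(cbf_scores, cf_scores, alpha=0.5):
    """Blend the content-based (CBF) and collaborative-filtering (CF) scores
    for each candidate paper. The linear blend and alpha=0.5 are assumptions;
    the slides only say the two modules run independently."""
    candidates = set(cbf_scores) | set(cf_scores)
    return {
        paper: alpha * cbf_scores.get(paper, 0.0) + (1 - alpha) * cf_scores.get(paper, 0.0)
        for paper in candidates
    }

# Example with made-up module outputs.
print(combine_scores({"p1": 0.9, "p2": 0.4}, {"p2": 0.8, "p3": 0.6}))
```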
21
Automatic evaluation: use a particular paper from the collection as the query and its citations as the relevant documents.
▪ Metrics: recall, precision, rank, coverage, co-cited probability (see the sketch below)
▪ Limitation: the evaluation is circular; the system is attempting to improve the citing ability of authors, but is evaluated against the papers that authors actually cite.
▪ The system might discover citations that are more relevant than the ones held out. Such citations may not have been included in the paper's reference list because of limits on space or because they overlapped with other references, possibly the one left out.
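A sketch of the held-out citation evaluation: the paper's own reference list serves as the relevant set, and precision and recall are computed over the top-k recommendations; the toy lists are made up.

```python
def precision_recall_at_k(recommended, actual_references, k=10):
    """Automatic evaluation: treat the paper's own reference list as the
    relevant set and score the top-k recommendations against it."""
    top_k = recommended[:k]
    hits = sum(1 for paper in top_k if paper in actual_references)
    precision = hits / k
    recall = hits / len(actual_references) if actual_references else 0.0
    return precision, recall

# Toy example: the system recommends five papers; the paper actually cites three.
recommended = ["p3", "p7", "p1", "p9", "p4"]
actual = {"p1", "p3", "p8"}
print(precision_recall_at_k(recommended, actual, k=5))  # (0.4, 0.666...)
```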
22
Manual evaluation: the authors of papers rate the relevance of the citations recommended for a paper they had written.
A full manual evaluation of retrieval accuracy was not possible.
23
He, Q., Pei, J., Kifer, D., Mitra, P., Giles, C. L., 2010, Context-aware Citation Recommendation, in Proceedings of the 19th International World Wide Web Conference (WWW), pp. 421-430.
Tang, J., Zhang, J., 2009, A Discriminative Approach to Topic-Based Citation Recommendation, in Proceedings of the Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD'09).
Gipp, B., Beel, J., Hentschel, C., 2009, Scienstein: A Research Paper Recommender System, in Proceedings of the International Conference on Emerging Trends in Computing (ICETiC'09), pp. 309-315.
Ritchie, A., 2008, Citation Context Analysis for Information Retrieval, PhD thesis, University of Cambridge.
Strohman, T., Croft, W. B., Jensen, D., 2007, Recommending Citations for Academic Papers, in Proceedings of the 30th Annual ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR), pp. 705-706.
McNee, S., Albert, I., Cosley, D., Gopalkrishnan, P., Lam, S., Rashid, A., Konstan, J., Riedl, J., 2002, On the Recommending of Citations for Research Papers, in Proceedings of the ACM Conference on Computer Supported Cooperative Work (CSCW'02).
24
Schafer, B., Frankowski, D., Herlocker, J., Sen, S., 2007, Collaborative Filtering Recommender Systems, in Brusilovsky, P., Kobsa, A., Nejdl, W. (eds.), The Adaptive Web: Methods and Strategies of Web Personalization, Lecture Notes in Computer Science, Vol. 4321, Springer-Verlag, Berlin Heidelberg New York.
Gori, M., Pucci, A., 2006, Research Paper Recommender Systems: A Random-Walk Based Approach, in Proceedings of the 2006 International Conference on Web Intelligence, pp. 778-781.
Kessler, M. M., 1963, Bibliographic Coupling Between Scientific Papers, American Documentation 14(1), pp. 10-25.
Small, H., 1973, Co-citation in the Scientific Literature: A New Measure of the Relationship Between Two Documents, Journal of the American Society for Information Science 24(4), pp. 265-269.
25