Evaluation of Google Co-op and Social Bookmarking at the Overseas Development Institute By Paul Matthews (p.matthews@odi.org.uk) and Arne Wunder (a.wunder@lse.ac.uk)
Background
- Knowledge management strategy at ODI
- Interest in Web 2.0 approaches: Communities of Practice share recommended sources and bookmarks
- Focuss.eu: initiative of European development think tanks
- Growing popularity of social bookmarking; interest in usage within organisations
Objective 1
Comparative relevance assessment of the specialised international development search engine Focuss.eu (using Google Custom Search) against Google web search
http://www.focuss.eu/
- 6600 sources
- 23 institutions
Objective 2
Investigate how staff use bookmarking and test a pilot intranet-based bookmarking system
Search engines: Research design
- No. of search engines compared: 2 (Google.com, Focuss.eu)
- Features of evaluation: blind relevance judgement of top eight live results; following hyperlinks was possible
- Queries: user-defined; range of subjects: development policy
- Basic population: 127
- No. of jurors (= sample): 72
- No. of queries: 1152
- Average no. of search terms: 2.66
- Impressions; items judged: 1023
- Originators, jurors: ODI staff (research, support) + other Focuss organisations
- Qualitative dimension: semi-structured expert interviews to capture user narratives and general internet research behaviour
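The deck does not show how the blind judgement was administered. As a minimal sketch of the general technique (all names hypothetical, not taken from the study), the top results from both engines can be pooled and shuffled so that jurors rate relevance without knowing which engine produced each hit:

```python
import random

def build_blind_task(query, results_a, results_b, top_n=8, seed=None):
    """Interleave the top-N results from two engines, hiding the source
    engine so a juror can judge relevance blind. Assumes result URLs
    are unique across the two lists."""
    rng = random.Random(seed)
    pooled = [(url, "A") for url in results_a[:top_n]] + \
             [(url, "B") for url in results_b[:top_n]]
    rng.shuffle(pooled)
    # The juror only sees the shuffled URLs; the engine labels stay in
    # a hidden answer key for later per-engine analysis.
    display = [url for url, _ in pooled]
    answer_key = {url: engine for url, engine in pooled}
    return display, answer_key
```

With the engine labels recorded only in the answer key, ratings can later be split back out by engine without the juror's judgement having been biased by brand recognition.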
Search engines: Application
Blind relevance ranking: 72 test subjects, 1100 rankings, 2.66 words per query
Search engines: Analysis
(1) Mean relevance: total score for relevance ratings divided by number of ratings
(2) Term-sensitive relevance: relevance comparison for searches using strictly development-related terms vs. "ambiguous" terms
(3) Relevance v. rank: how does relevance relate to position in the results?
(4) Direct case-by-case comparison: comparison of relevance scores for each query: which search engine "wins"?
(5) User narratives: what role do search engines play in individual research strategies?
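Analyses (1) and (4) are simple enough to sketch in code. The following Python fragment (hypothetical names; the study's actual scripts are not shown) computes mean relevance as the slide defines it, and tallies per-query wins for the direct comparison:

```python
def mean_relevance(ratings):
    """(1) Mean relevance: total score divided by number of ratings."""
    return sum(ratings) / len(ratings)

def case_by_case_winner(scores_a, scores_b):
    """(4) Direct case-by-case comparison: for each query, which engine
    'wins'? scores_a and scores_b map query -> mean relevance score,
    and are assumed to cover the same set of queries."""
    tally = {"A": 0, "B": 0, "tie": 0}
    for query in scores_a:
        a, b = scores_a[query], scores_b[query]
        if a > b:
            tally["A"] += 1
        elif b > a:
            tally["B"] += 1
        else:
            tally["tie"] += 1
    return tally
```

The win/loss tally is what makes a claim like "Focuss outperforms Google in a significant number of searches" checkable query by query rather than only in aggregate.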
Search engines: Findings (1) Mean overall relevance
Interpretation: globally, Focuss slightly outperforms Google web search.
Search engines: Findings (2) Relevance - Rank
Search engines: Findings (3) Term-sensitive relevance
Interpretation: the strength of Focuss lies in dealing with relatively ambiguous terms.
Search engines: Findings (4) Direct case-by-case comparison
Interpretation: Focuss outperforms Google web search in a significant number of searches, although this advantage is less clear in searches using strictly development-related terms.
Search engines: Findings from interviews
- Search engines: quick and convenient starting points, but much less appropriate for complex enquiries. Excellent for "grey material" (news, contacts, policy documents, less formal research). Professional directories such as Eldis or ReliefWeb often provide a better, more structured introduction to specific development topics.
- Scholarly material: printed/electronic scholarly journals and books are most authoritative, but suffer from access/subscription issues.
- Contribution of Focuss: Focuss appears to be a somewhat more targeted starting point than Google, but cannot replace other sources.
Search engines: Conclusion
- In most cases Focuss delivers "better" results than Google
- But there is still no guarantee that Focuss is always a better choice than Google
- Focuss's strength is its context-specificity
- Search engines are quick and convenient starting points
- Doing good development research means more than just choosing the "right" search engine
Bookmarking: Design
- Survey of user requirements and behaviour
- Creation of bookmarking module for intranet (MS SharePoint)
- Usability testing
- Preliminary analysis
Bookmarking: Survey (n=18)
Bookmarking: Application
Bookmarking: Testing - task completion
1) Manual add (100%)
2) Favourites upload (60%): non-standard characters in links; wrong destination URL
3) Bookmarklet (46%): pop-up blockers; IE security zones
Bookmarking: Testing - feedback
681 bookmarks, 633 URLs, 140 tags
- Usability problems
- What are the incentives for and advantages of sharing?
- Preference for structured over free tagging
- Public v. private bookmarking
Bookmarking: Analysis
Emergence of a long-tail folksonomy
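A long-tail folksonomy is easy to detect from the bookmark data itself: a few tags are used heavily while most appear only once or twice. As a rough Python sketch (hypothetical data shape, not the study's actual analysis code):

```python
from collections import Counter

def tag_distribution(bookmarks):
    """Rank tags by usage frequency across a set of bookmarks, where
    each bookmark is assumed to be a dict with a 'tags' list. A long
    tail shows up as a few heavily used tags followed by many tags
    used only once."""
    counts = Counter(tag for bm in bookmarks for tag in bm["tags"])
    ranked = counts.most_common()          # [(tag, count), ...] descending
    singletons = sum(1 for _, n in ranked if n == 1)
    return ranked, singletons
```

Plotting the ranked counts (rank on the x-axis, frequency on the y-axis) is the usual way to visualise the long tail that the slide refers to.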
Bookmarking: Conclusions
- Use of implicit taxonomy is useful and time-saving
- User base technically unsophisticated
- Users want both order (taxonomy) and flexibility (free tagging)
- We need to prove the value of sharing and reuse (maybe harness interest in RSS)
What can you take from this?
- We are not promoting a specific engine or bookmarking system, but the approaches may be useful to your own networks
- Community filters may help to mitigate information overload!
http://www.focuss.eu/yourcontent.html
http://www.google.com/coop
References
- Brophy, J. and Bawden, D. (2005) 'Is Google enough? Comparison of an internet search engine with academic library resources'. Aslib Proceedings 57(6): 498-512.
- Kesselman, M. and Watstein, S.B. (2005) 'Google Scholar and libraries: point/counterpoint'. Reference Services Review 33(4): 380-387.
- Mathes, A. (2004) 'Folksonomies - Cooperative Classification and Communication Through Shared Metadata'.
- Millen, D., Feinberg, J. and Kerr, B. (2005) 'Social bookmarking in the enterprise'. ACM Queue 3(9): 28-35.
Copyright note Copyright Paul Matthews and Arne Wunder 2007. This work is the intellectual property of the authors. Permission is granted for this material to be shared for non-commercial, educational purposes, provided that this copyright statement appears on the reproduced materials and notice is given that the copying is by permission of the authors. To disseminate otherwise or to republish requires written permission from the authors.