1 The Social Web or how flickr changed my life
Kristina Lerman
USC Information Sciences Institute
http://www.isi.edu/~lerman

2 Web 1.0

3 Web 2.0

4 Elements of the Social Web
• Users contribute content
−Images (Flickr, Zooomr), news stories (Digg, Reddit), bookmarks (Delicious, Bibsonomy), videos (YouTube, Vimeo), …
• Users add metadata to content
−Tags: annotate content with freely chosen keywords
−Discussion: leave comments
−Evaluation: active through voting, or passive through views & favorites
• Users create social networks
−Add other users as friends/contacts
−Sites provide an easy interface to track friends’ activities
• Transparency
−Publicly navigable content and metadata

5 Flickr
(screenshot of a photo page: tags, submitter, discussion, image stats)

6 User profile

7 User’s tags
• Tags are keyword-based metadata added to content
−Help users organize their own data
−Facilitate searching and browsing for information
−Freely chosen by the user

8 User’s favorite images (by other photographers)

9 So what?
• By exposing human activity, the Social Web allows users to exploit the intelligence and opinions of others to solve problems
• New way of interacting with information
−Social Information Processing
• Exploit collective effects
−Word of mouth to amplify good information
• Amenable to analysis
−Design optimal social information processing systems
• Challenge for AI: harness the power of collective intelligence to solve information processing problems

10 Outline for the rest of the talk
User-contributed metadata can be used to solve the following information processing problems:
• Discovery
−Collectively added tags used for information discovery
• Personalization
−User-added metadata, in the form of tags and social networks, used to personalize search results
• Recommendation
−Social networks for information filtering
• Dynamics of collaboration
−Mathematical study of a collaborative rating system

11 Discovery / personalization / recommendation / dynamics of collaboration
with: Anon Plangrasopchok

12 Information discovery
• Goal: automatically find resources that provide some functionality
−weather conditions, flight tracking, geocoding, …
• Simpler goal: find resources that provide the same functionality as a seed, e.g., http://flytecomm.com
−Improve robustness of information integration applications
−Increase coverage of the applications
• Approach: leverage user-contributed tags to discover new resources similar to the seed

13 Anatomy of a Delicious resource
(screenshot: popular tags, user tags, user notes)

14 Probabilistic approach
• Find a compressed description of the source
−Extract “latent topics” in a collection of sources, using a probabilistic generative model
• Compute pair-wise similarity between the seed and a source using the compressed descriptions
(diagram: Users/Tags/Sources → Probabilistic Model → Compressed description → Compute Source Similarity → Similar sources, sorted)
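To make the last step of the pipeline concrete, here is a minimal sketch that assumes the probabilistic model has already produced a topic distribution p(z|source) for each source; the cosine measure, the candidate names, and the numbers are illustrative assumptions, not necessarily the paper’s choices.

    import numpy as np

    def rank_by_similarity(seed_dist, source_dists):
        """Rank candidate sources by the cosine similarity of their
        topic distributions to the seed's topic distribution."""
        def cosine(p, q):
            return float(np.dot(p, q) / (np.linalg.norm(p) * np.linalg.norm(q)))
        scores = {src: cosine(seed_dist, d) for src, d in source_dists.items()}
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

    # Toy 4-topic compressed descriptions p(z|source); values are made up
    seed = np.array([0.70, 0.10, 0.10, 0.10])          # e.g., the flytecomm seed
    candidates = {
        "flightaware.com": np.array([0.65, 0.15, 0.10, 0.10]),
        "hotel-search.example": np.array([0.05, 0.80, 0.10, 0.05]),
    }
    print(rank_by_similarity(seed, candidates))  # flight tracker ranks first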

15 Alternative models
(plate diagrams for the three models; N_t and N_b denote the numbers of tags and bookmarks)
−ITM [Plangrasopchok & Lerman, IIWeb ’07]
−pLSA [Hofmann, UAI ’99]
−MWA [Wu et al., WWW ’06]

16 Datasets
• Seed resources: flytecomm, geocoder, wunderground
−For each seed, retrieve the 20 popular tags
−For each tag, retrieve other resources annotated with the same tag
−For each resource, retrieve all resource-user-tag triples

                flytecomm   geocoder   wunderground
    Resources   3,562       5,572      7,176
    Tags        14,297      16,887     77,056
    Users       34,594      46,764     45,852
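The collection procedure above is a three-level crawl. A sketch of its control flow follows; the three fetch functions are hypothetical stand-ins for Delicious scrapers or API calls, and only the looping structure comes from the slide.

    def collect_dataset(seed, get_popular_tags, get_resources_for_tag,
                        get_triples_for_resource, n_tags=20):
        """Crawl outward from a seed resource: its popular tags, the
        resources sharing those tags, then every (resource, user, tag)
        triple annotating those resources."""
        triples = set()  # (resource, user, tag)
        for tag in get_popular_tags(seed)[:n_tags]:
            for resource in get_resources_for_tag(tag):
                triples.update(get_triples_for_resource(resource))
        return triples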

17 Experimental results
• Number of sources found with functionality similar to the seed (results chart comparing four methods)
−pLSA – ignores users
−MWA – naïve Bayes
−ITM – our model (user interests and source topics)
−Google – ‘find similar pages’
Plangrasopchok & Lerman, “Exploiting Social Annotation for Resource Discovery,” AAAI IIWeb workshop, 2007

18 Summary and future work
• Exploit the tagging activities of different users to find data sources similar to a seed
• Future work
−Extend the probabilistic model to learn topic hierarchies (aka folksonomies), e.g.:
    Travel
      Flights
        »Booking
        »Status
      Hotels
        »Booking
        »Reviews
      Car rentals
      Destinations

19 discovery / Personalization / recommendation / dynamics of collaboration
with: Anon Plangrasopchok & Michael Wong

20 Image search on Flickr
• Tag search finds all images tagged with a given keyword… but it is prone to ambiguity
• Beetle
−Insect
−Car model
• Tiger
−Panthera tigris
−House cat
−Shark (tiger shark)
−Mac OS X
−Flower (tiger lily)
• Newborn
−Baby
−Kitten
−Puppy
−Etc.

21 Plain tag search
Relevance results for the top 500 images retrieved by tag search (manually labeled using the first sense of each keyword); precision = relevant/500, e.g., 412/500 = 0.82:

    Query     Sense             Relevant   Precision
    newborn   baby              412        0.82
    tiger     Panthera tigris   337        0.67
    beetle    insect            232        0.46

22 Personalizing search results
• Users express their tastes and preferences through the metadata they create
−Contacts they add to their social networks
−Tags they add to their own images
−Images they mark as favorites
−Groups they join
• Use this metadata to improve image search results!
−Personalizing by tags
−Personalizing by contacts: restrict the results of an image search to images submitted by user u’s friends (Level 1 contacts); a minimal sketch follows below
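A minimal sketch of contact-based filtering, assuming search results arrive as (image, submitter) pairs and contacts as a dict of sets; both are hypothetical structures standing in for Flickr API data.

    def personalize_by_contacts(user, search_results, contacts, max_level=2):
        """Restrict tag-search results to images submitted by the user's
        social network. max_level=1 keeps only direct (Level 1) contacts;
        max_level=2 also admits contacts-of-contacts (Level 2)."""
        network = set(contacts.get(user, set()))        # Level 1 contacts
        if max_level >= 2:
            for friend in list(network):                # add Level 2 contacts
                network |= contacts.get(friend, set())
        return [(img, sub) for img, sub in search_results if sub in network]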

23 Personalizing by contacts: results

Restricting results to Level 1 contacts:

    Query     User     #L1   rel.  not rel.  Pr     Re
    newborn   user1    719   232   0         1.00   0.56
              user2    154   169   0         1.00   0.41
              user3    174   147   0         1.00   0.36
              user4    128   132   0         1.00   0.32
    tiger     user5    63    11    1         0.92   0.03
              user6    103   78    3         0.96   0.23
              user7    62    65    1         0.98   0.19
              user8    56    30    0         0.97   0.09
    beetle    user9    445   18    1         0.95   0.08
              user10   364   25    8         0.81   0.15
              user11   783   78    25        0.75   0.34
              user12   102   7     1         0.88   0.03

Adding Level 2 contacts (same users, in the same order):

    #L1+L2    Pr     Re
    49,539    0.85
    10,970    0.90   0.77
    13,153    0.89   0.79
    8,439     0.91   0.75
    13,142    0.78   0.76
    14,425    0.76   0.79
    7,270     0.79   0.67
    7,073     0.79   0.71
    53,480    0.49   0.93
    41,568    0.49   0.90
    62,610    0.49   0.94
    14,324    0.52   0.70

L1+L2: 9%–16% average improvement in precision

24 Personalizing by tags
• Users often add descriptive metadata to images
−Tags
−Titles
−Image descriptions
−Adding the image to groups
• Personalizing by tags
−Find (hidden) topics of interest to the user
−Find images in the search results related to these topics

25 Probabilistic topic model
• Tagging as a stochastic process
−User u posts an image i
−Based on u’s interests, topics z are chosen
−Tag t is selected based on z
• Probabilistic topic model: p(t|u) = Σ_z p(t|z) p(z|u)
• Use EM to estimate p(t|z) and p(z|u) from data
−To find topics in each search set of 4,500 images
(plate diagram: U → Z → T, repeated N_t times per image I)
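A minimal EM sketch for this model, treating the data as a user-by-tag count matrix; it is an illustrative implementation of the stated factorization, not the authors’ code.

    import numpy as np

    def fit_topic_model(counts, n_topics=10, n_iter=50, seed=0):
        """EM for p(t|u) = sum_z p(t|z) p(z|u), fit to a user-by-tag
        count matrix `counts` of shape (n_users, n_tags)."""
        rng = np.random.default_rng(seed)
        n_users, n_tags = counts.shape
        p_t_z = rng.random((n_topics, n_tags))
        p_t_z /= p_t_z.sum(axis=1, keepdims=True)
        p_z_u = rng.random((n_users, n_topics))
        p_z_u /= p_z_u.sum(axis=1, keepdims=True)
        for _ in range(n_iter):
            # E-step: responsibilities p(z|u,t), proportional to p(z|u) p(t|z)
            resp = p_z_u[:, :, None] * p_t_z[None, :, :]    # shape (u, z, t)
            resp /= resp.sum(axis=1, keepdims=True) + 1e-12
            # M-step: re-estimate both distributions from expected counts
            ec = counts[:, None, :] * resp
            p_t_z = ec.sum(axis=0)
            p_t_z /= p_t_z.sum(axis=1, keepdims=True) + 1e-12
            p_z_u = ec.sum(axis=2)
            p_z_u /= p_z_u.sum(axis=1, keepdims=True) + 1e-12
        return p_t_z, p_z_u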

26 p(t|z)
“tiger” image set: 4,500 images, trained on 10 topics. Highest-probability tags for five of the topics:

    Topic 1         Topic 2             Topic 5    Topic 8      Topic 10
    zoo             specanimal          cat        apple        lion
    animal          animalkingdomelite  kitty      mac          dog
    nature          abigfave            cute       osx          shark
    animals         flower              kitten     macintosh    nyc
    wild            butterfly           cats       screenshot   cat
    tijger          macro               orange     macosx       man
    wildlife        yellow              eyes       desktop      people
    ilovenature     swallowtail         pet        imac         arizona
    cub             lily                tabby      stevejobs    rock
    siberiantiger   green               stripes    dashboard    beach
    blijdorp        canon               whiskers   macbook      sand
    london          insect              white      powerbook    sleeping
    australia       nature              art        os           tree
    portfolio       pink                feline     104          forest

27 Personalizing by tags: results
• Precision of the N top-ranked search results, compared to plain search (plots for “newborn” and “beetle”)
−4 users chosen to be interested in the first sense of the search term
−Plain search – Flickr’s ordering of search results
Lerman et al., “Personalizing Image Search Results on Flickr,” AAAI ITWP workshop, 2007

28 Summary & future work
• Improves the results of image search for an individual user, as long as the user has expressed interest in the topic of the search
• Future work
−Lots of other metadata to exploit: favorites, groups, image titles and descriptions
−Discover relevant synonyms to expand the search
−Topics that are new to the user?
  »Exploit collective knowledge to find communities of interest
  »Identify authorities within those communities

29 discovery / personalization / Recommendation / dynamics of collaboration
with: Dipsy Kapoor

30 Social news aggregation on Digg
• Users submit stories
• Users vote on (digg) stories
−Stories are promoted to the front page based on the votes they receive
−A collaborative front page emerges from the opinions of many users, not a few editors
• Users create social networks by adding others as friends
−The Friends Interface makes it easy to track friends’ activities
  »Stories friends submitted
  »Stories friends dugg (voted on)

31 Top users
• Digg ranks users
−Based on how many of their stories were promoted to the front page
−The user with the most promoted stories is ranked #1, …
• Top 1000 users data
−Collected by scraping Digg… now available through the API
−Usage statistics: user rank; how many stories the user submitted, dugg, commented on
−Social networks
  »Friends: outgoing links; A → B := B is a friend of A
  »Reverse friends: incoming links; A → B := A is a reverse friend of B
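A small sketch of the two link views, using a plain dict-of-sets adjacency as a hypothetical stand-in for the scraped network data.

    from collections import defaultdict

    def reverse_friends(friends):
        """Given outgoing links friends[A] = set of A's friends (A -> B
        means B is a friend of A), build the incoming-link view:
        reverse[B] = the set of users who list B as a friend, i.e.,
        B's reverse friends."""
        reverse = defaultdict(set)
        for a, outgoing in friends.items():
            for b in outgoing:
                reverse[b].add(a)
        return dict(reverse)

    # Example
    friends = {"alice": {"bob", "carol"}, "bob": {"carol"}}
    print(reverse_friends(friends))  # {'bob': {'alice'}, 'carol': {'alice', 'bob'}}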

32 Digg datasets
• To see how votes change in time
−Tracked 2,858 stories submitted over a period of more than a day in May 2006
−Only 98 stories were promoted to the front page
• To see how users vote on stories
−For ~200 front page stories, collected the names of the users who voted on (dugg) each story

33 Dynamics of votes
(plot: votes over time for top users’ stories)

34 ‘Interestingness’ distribution
• Top users are not submitting the most “interesting” stories
(histograms of maximum votes, two groups of front-page stories: 50 stories from 14 users, ave. max votes = 600; 48 stories from 45 users, ave. max votes = 1050)

35 Social filtering as recommendation
• Social filtering explains why top users are so successful
• Users express their preferences by creating social networks
• They use these networks – through the Friends Interface – to find new stories to read
−Claim 1: users digg stories their friends submit
−Claim 2: users digg stories their friends digg

36 Social network on Digg
(graph visualization of the top 1000 Digg users)

37 How the Friends Interface works
(diagram: submitter → ‘see stories my friends submitted’ → ‘see stories my friends dugg’)

38 Users digg stories submitted by friends
• Number of diggs coming from the submitter’s friends
−The probability that that many friends dugg a story by chance is P = 0.005
(plot: num reverse friends vs. num diggs from friends)
Lerman, “Social Browsing & Information Filtering in Social Media,” submitted to JCMC

39 Users digg stories their friends digg
• Combined social network size of the first m diggers, and number of diggs coming from users within the combined network
• Probability such numbers could have been observed by chance:

    After m diggs   Probability
    m = 1           P = 0.005
    m = 6           P = 0.028
    m = 16          P = 0.060
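The slides report the chance probabilities without spelling out the null model. One plausible choice, sketched here as an assumption rather than the paper’s method, treats the diggers as drawn uniformly without replacement from the user population, so the number of friend-diggs follows a hypergeometric distribution.

    from scipy.stats import hypergeom

    def p_friend_diggs_by_chance(n_users, n_friends, n_diggs, k_from_friends):
        """P(at least k of the n_diggs voters fall inside the submitter's
        friend set) when voters are drawn uniformly without replacement
        from a population of n_users. An assumed null model, for
        illustration only."""
        return float(hypergeom.sf(k_from_friends - 1, n_users, n_friends, n_diggs))

    # e.g., 1000 users, 50 reverse friends, 30 diggs, 8 of them from friends
    print(p_friend_diggs_by_chance(1000, 50, 30, 8))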

40 ‘Tyranny of the minority’
• Top users submit the lion’s share of front page stories
• Explained by social filtering
−Top users have bigger, more active social networks
• Conspiracy: an alternative explanation of top-user success
−Top users were accused of colluding to automatically promote each other’s stories
−The resulting uproar led Digg to change its story promotion algorithm to discount votes coming from friends
−This led to greater front page diversity, but also to unintended consequences

41 Effect of the new promotion algorithm
• Intended effects
−Greater user diversity on the front page
−Smaller spread in story interestingness
• Unintended consequences
−Discourages users from joining social networks
−Alienates top users
(histograms: 36 stories from 24 users, ave. max votes = 960; 35 stories from 35 users, ave. max votes = 1270)

42 Design of collaborative rating systems
• Designing a collaborative rating system, which exploits the emergent behavior of many independent evaluators, is difficult
−Small changes can have big consequences
−Few tools to predict system behavior: execution, simulation
• Can we explore the effects of promotion algorithms before they are implemented?

43 discovery / personalization / recommendation / Dynamics of collaboration
with: Dipsy Kapoor

44 Analysis as a design tool
• Mathematical analysis can help understand and predict the emergent behavior of collaborative information systems
−Study the choice of promotion algorithm before it is implemented
−Effect of design choices on system behavior: story timeliness, interestingness, user participation, incentives to join social networks, etc.

45 Dynamics of collaborative rating
A story is characterized by
• Interestingness r
−The probability a story will receive a vote when seen by a user
• Visibility
−Visibility on the upcoming stories page: decreases with time as new stories are submitted
−Visibility on the front page: decreases with time as new stories are promoted
−Visibility through the Friends Interface: stories friends submitted; stories friends dugg (voted on)

46 Mathematical model
• The mathematical model describes how the number of votes m(t) changes in time (the equation appeared on the slide as an image; see the sketch below)
• Solve the equation
−Solutions are parametrized by S and r
−Other parameters are estimated from data
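Consistent with the visibility terms listed on the previous slide, the model has the schematic form below; this is a reconstruction from the surrounding bullets, not the paper’s exact notation.

    \frac{dm}{dt} \;=\; r\left( v_{\mathrm{upcoming}}(t) \;+\; v_{\mathrm{front}}(t) \;+\; v_{\mathrm{friends}}(S, t) \right)

Each visibility term decays or grows as described above, so integrating the equation yields vote curves m(t) parametrized by the interestingness r and the social network size S.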

47 Dynamics of votes
(plots: vote counts over time, data vs. model)
Lerman, “Social Information Processing in Social News Aggregation,” IEEE Internet Computing (in press), 2007

48 Exploring the parameter space
• Minimum S required for a story to be promoted, for a given r and a fixed promotion threshold
• Time taken for a story with a given r and S to be promoted to the front page, for a fixed promotion threshold
(plots; a numerical sketch follows below)
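A toy version of this kind of sweep: numerically integrate the rate equation for a grid of (r, S) values and record when m(t) crosses a fixed promotion threshold. The visibility function, the threshold, and all constants here are illustrative assumptions, not fitted values from the paper.

    import numpy as np

    def time_to_promotion(r, S, threshold=40.0, dt=0.1, t_max=48.0,
                          a=1.0, decay=0.1, b=0.01):
        """Integrate dm/dt = r * v(t) with a toy visibility
        v(t) = a*exp(-decay*t) + b*S (decaying page visibility plus a
        friends-interface term proportional to network size S) and
        return the time m(t) crosses the threshold, or None if never."""
        m, t = 0.0, 0.0
        while t < t_max:
            v = a * np.exp(-decay * t) + b * S
            m += r * v * dt
            if m >= threshold:
                return t
            t += dt
        return None

    # Sweep interestingness r for a fixed network size S
    for r in (0.2, 0.5, 0.9):
        print(r, time_to_promotion(r, S=100))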

49 Dynamics of user influence
• Digg ranked users according to how many front page stories they had
• Model of the dynamics of user influence
−Number of stories promoted to the front page, F
−User’s social network growth, S
(plots for user1–user6)

50 Model of rank dynamics
• Number of stories promoted to the front page, F
−Number of stories M submitted over Δt = one week
−User’s promotion success rate depends on S(t)
• User’s social network S grows as
−Others discover him through new front page stories, ~ ΔF
−Others discover him through the Top Users list, ~ g(F)
• Solve the equations (a schematic form is sketched below)
−Estimate b, c, g(F) from data
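Schematically, and only as a reconstruction from the bullets above, the coupled growth can be written as follows; w is a hypothetical name for the promotion success rate, and the exact functional forms are in the cited paper.

    \frac{\Delta F}{\Delta t} \;=\; M\, w\!\left(S(t)\right),
    \qquad
    \frac{\Delta S}{\Delta t} \;=\; b\,\frac{\Delta F}{\Delta t} \;+\; c\, g(F)

Solving the pair and fitting b, c, and g(F) to data gives the per-user trajectories compared on the next three slides.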

51 Solutions 1
(plots: user2 and user6, data vs. model)
Lerman, “Dynamics of Collaborative Rating of Information,” KDD/SNA workshop, 2007

52 Solutions 2
(plots: user1 and user5, data vs. model)
Lerman, “Dynamics of Collaborative Rating of Information,” KDD/SNA workshop, 2007

53 Solutions 3
(plots: user3 and user4, data vs. model)
Lerman, “Dynamics of Collaborative Rating of Information,” KDD/SNA workshop, 2007

54 Previous work
Technologies that exploit the independent activities of many users for information discovery and recommendation
• Collaborative filtering [e.g., the GroupLens project, 1997–present]
−Users express opinions by rating many products
−The system finds users with similar opinions and recommends products liked by those users
−Product recommendation is used by Amazon & Netflix, but users are reluctant to rate products
• Social navigation [Dieberger et al., 2000]
−Exposes the activity of others to help guide users to quality information sources
  »“N users found X helpful”
  »Best-seller lists, “what’s popular” pages, etc.

55 Conclusions
• In their everyday use of Social Web sites, users create large quantities of data, which express their knowledge and opinions
−Content: articles, media content, opinion pieces, etc.
−Metadata: tags, ratings, discussion, social networks
−Links between users, content, and metadata
• The Social Web enables new problem-solving approaches
−Social Information Processing: use the knowledge, opinions, and work of others for one’s own information needs
−Collective problem solving: efficient, robust solutions beyond the scope of individual capabilities

56 Upcoming events
• Social Information Processing Symposium
−When: March 2008
−Where: AAAI Spring Symposium series @ Stanford
−Organizers: K. Lerman, B. Huberman (HP Labs), D. Gutelius (SRI), S. Merugu (Yahoo)
−http://www.isi.edu/~lerman/sss07/

57 The future of the Social Web
• Instead of connecting data, the Web connects people
• New applications
−Collaboration tools
  »Collective intelligence: a large group of connected individuals acts more intelligently than individuals on their own
−The personalization of everything
  »The more the system learns about me, the better it should filter
−Discovery, not search
  »What papers do I need to read to know about the research on social networks?
−Identifying emerging communities
  »Community-based vocabulary
  »Authoritative sources within the community

58 The future of the Social Web
New challenge for AI: instead of ever-cleverer algorithms, harness the Collective Intelligence
• Semantic Web vision [Berners-Lee & Hendler, Scientific American, 2001]
−Web content annotated with machine-readable metadata (a formal classification system) to aid automatic information integration
−Still unrealized in 2007
  »Too complicated: specialized training is needed to use it effectively
  »Costly and time-consuming to produce
  »Variety of specialized ontologies: the ontology alignment problem
• Folksonomy: a “user generated taxonomy used to categorize and retrieve web content using open-ended labels called tags” [source: Wikipedia]
−Bottom-up: decentralized, emergent, scalable
−Dynamic: adapts to changing needs and priorities
−Noisy: need tools to extract meaning from the data

