590 Scraping – Social Web Topics Readings: Scrapy – pipeline.py


590 Scraping – Social Web Topics. Readings: Scrapy – pipeline.py. Social web overview readings: Scrapy documentation; Mining the Social Web. April 20, 2017

Item Pipeline After an item has been scraped by a spider, it is sent to the Item Pipeline, which processes it through several components that are executed sequentially. Each item pipeline component (sometimes referred to simply as an “item pipeline”) is a Python class that implements a simple method. It receives an item, performs an action on it, and also decides whether the item should continue through the pipeline or be dropped and no longer processed.

Typical uses of item pipelines are:
- cleansing HTML data
- validating scraped data (checking that the items contain certain fields)
- checking for duplicates (and dropping them)
- storing the scraped item in a database

Writing your own item pipeline Each item pipeline component is a Python class that must implement the following method:
process_item(self, item, spider)
This method is called for every item pipeline component. process_item() must either return a dict with data, return an Item (or any subclass) object, return a Twisted Deferred, or raise a DropItem exception. Dropped items are no longer processed by further pipeline components.
Parameters:
item (Item object or a dict) – the item scraped
spider (Spider object) – the spider which scraped the item
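
A minimal sketch to make that contract concrete before the documentation examples below; the pipeline name and the 'title' field are hypothetical:

from scrapy.exceptions import DropItem

class TitlePipeline(object):
    """Hypothetical pipeline: drop items missing a 'title' field,
    otherwise normalize the field and pass the item along."""

    def process_item(self, item, spider):
        if not item.get('title'):
            # Raising DropItem stops the item here; later pipeline components never see it.
            raise DropItem("Missing title in %s" % item)
        item['title'] = item['title'].strip()
        # Returning the item (or a dict) passes it on to the next pipeline component.
        return item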

Additional Methods
open_spider(self, spider) This method is called when the spider is opened. Parameters: spider (Spider object) – the spider which was opened
close_spider(self, spider) This method is called when the spider is closed. Parameters: spider (Spider object) – the spider which was closed
from_crawler(cls, crawler) If present, this classmethod is called to create a pipeline instance from a Crawler. It must return a new instance of the pipeline. The Crawler object provides access to all Scrapy core components, such as settings and signals; it is a way for the pipeline to access them and hook its functionality into Scrapy.
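
A short sketch tying the three hooks together; the ITEM_LIMIT setting and the ItemLimitPipeline name are made up for illustration, not part of Scrapy:

from scrapy.exceptions import DropItem

class ItemLimitPipeline(object):
    """Hypothetical pipeline that stops accepting items after a limit
    read from the project settings."""

    def __init__(self, limit):
        self.limit = limit
        self.count = 0

    @classmethod
    def from_crawler(cls, crawler):
        # Scrapy calls this once to build the pipeline; settings come from the Crawler.
        return cls(limit=crawler.settings.getint('ITEM_LIMIT', 100))

    def open_spider(self, spider):
        self.count = 0  # reset when the spider starts

    def close_spider(self, spider):
        spider.logger.info("Kept %d items", self.count)

    def process_item(self, item, spider):
        if self.count >= self.limit:
            raise DropItem("Item limit reached")
        self.count += 1
        return item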

Item pipeline example – Price validation
Let’s take a look at the following hypothetical pipeline that adjusts the price attribute for those items that do not include VAT (price_excludes_vat attribute), and drops those items which don’t contain a price:

from scrapy.exceptions import DropItem

class PricePipeline(object):
    vat_factor = 1.15

    def process_item(self, item, spider):
        if item['price']:
            if item['price_excludes_vat']:
                item['price'] = item['price'] * self.vat_factor
            return item
        else:
            raise DropItem("Missing price in %s" % item)

Item pipeline example – Write items to a JSON file

import json

class JsonWriterPipeline(object):
    def open_spider(self, spider):
        # Text mode, since json.dumps() returns a str.
        self.file = open('items.jl', 'w')

    def close_spider(self, spider):
        self.file.close()

    def process_item(self, item, spider):
        line = json.dumps(dict(item)) + "\n"
        self.file.write(line)
        return item
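
Note: as the Scrapy documentation points out, JsonWriterPipeline exists only to illustrate how pipelines work; to simply save all scraped items to a JSON Lines file, the built-in feed exports are easier (the spider name below is a placeholder):

scrapy crawl myspider -o items.jl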

Item pipeline example – Write items to MongoDB

import pymongo

class MongoPipeline(object):
    collection_name = 'scrapy_items'

    def __init__(self, mongo_uri, mongo_db):
        self.mongo_uri = mongo_uri
        self.mongo_db = mongo_db

    @classmethod
    def from_crawler(cls, crawler):
        return cls(
            mongo_uri=crawler.settings.get('MONGO_URI'),
            mongo_db=crawler.settings.get('MONGO_DATABASE', 'items')
        )

    def open_spider(self, spider):
        self.client = pymongo.MongoClient(self.mongo_uri)
        self.db = self.client[self.mongo_db]

    def close_spider(self, spider):
        self.client.close()

    def process_item(self, item, spider):
        # insert_one() replaces the deprecated Collection.insert().
        self.db[self.collection_name].insert_one(dict(item))
        return item
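
For from_crawler() to find those values, they must be defined in the project's settings.py. A minimal sketch; the URI and database name are examples, not defaults:

# settings.py
MONGO_URI = 'mongodb://localhost:27017'
MONGO_DATABASE = 'items'

ITEM_PIPELINES = {
    'myproject.pipelines.MongoPipeline': 300,
}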

Item pipeline example – Take screenshot of item
This example demonstrates how to return a Deferred from the process_item() method. It uses Splash to render a screenshot of the item URL. The pipeline makes a request to a locally running instance of Splash. After the request is downloaded and the Deferred callback fires, it saves the item to a file and adds the filename to the item.

import scrapy
import hashlib
from urllib.parse import quote

class ScreenshotPipeline(object):
    """Pipeline that uses Splash to render a screenshot of every Scrapy item."""

    SPLASH_URL = "http://localhost:8050/render.png?url={}"

    def process_item(self, item, spider):
        encoded_item_url = quote(item["url"])
        screenshot_url = self.SPLASH_URL.format(encoded_item_url)
        request = scrapy.Request(screenshot_url)
        dfd = spider.crawler.engine.download(request, spider)
        dfd.addBoth(self.return_item, item)
        return dfd

    def return_item(self, response, item):
        if response.status != 200:
            # Error happened, return item.
            return item

        # Save screenshot to file; filename will be the hash of the URL.
        url = item["url"]
        url_hash = hashlib.md5(url.encode("utf8")).hexdigest()
        filename = "{}.png".format(url_hash)
        with open(filename, "wb") as f:
            f.write(response.body)

        # Store filename in item.
        item["screenshot_filename"] = filename
        return item
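
This pipeline assumes a Splash instance is listening on localhost:8050; the usual way to get one is the official Docker image:

docker run -p 8050:8050 scrapinghub/splash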

Item pipeline example – Duplicates filter
A filter that looks for duplicate items and drops those items that were already processed:

from scrapy.exceptions import DropItem

class DuplicatesPipeline(object):
    def __init__(self):
        self.ids_seen = set()

    def process_item(self, item, spider):
        if item['id'] in self.ids_seen:
            raise DropItem("Duplicate item found: %s" % item)
        else:
            self.ids_seen.add(item['id'])
            return item

Activating an Item Pipeline component
To activate an Item Pipeline component you must add its class to the ITEM_PIPELINES setting, as in the following example:

ITEM_PIPELINES = {
    'myproject.pipelines.PricePipeline': 300,
    'myproject.pipelines.JsonWriterPipeline': 800,
}

The integer values determine the order in which the pipelines run: items pass through lower-valued pipelines first. By convention the values are chosen in the 0–1000 range.

Mining the Social Web The GitHub repository containing the latest and greatest bug-fixed source code for this book is available at http://bit.ly/MiningTheSocialWeb2E.
Russell, Matthew A. Mining the Social Web: Data Mining Facebook, Twitter, LinkedIn, Google+, GitHub, and More (Kindle Locations 101–102). O'Reilly Media. Kindle Edition.

Mining the Social Web

Mining the Social Web – GitHub Site

Take Home Questions
- Google to generate start_urls: write Python code to Google “Coach Staley” and scrape the top 25 URLs; hand-select three of them to create start_urls (choose by ease of parsing). A hedged sketch follows below.
- Separate from Scrapy: run the NLTK relation extractor from the NLTK book, Chapter 07, section 3.6, and store the results in a JSON file.
- Use selenium/PhantomJS to scrape today’s NBA scores from http://www.espn.com/nba/scoreboard
- Use selenium/PhantomJS with Scrapy to scrape today’s NBA scores from http://www.espn.com/nba/scoreboard
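
For the first task, one possible sketch uses the third-party googlesearch module (installed from PyPI as "google"); the query string, result count, and output filename are assumptions, and the search() signature differs between package versions:

import json
from googlesearch import search  # third-party: pip install google

# Collect the top 25 result URLs for the query.
# (Google may throttle or CAPTCHA automated queries, so keep the pause generous.)
urls = list(search("Coach Staley", stop=25, pause=2.0))

# Save them so three hand-picked URLs can later become a spider's start_urls.
with open("staley_urls.json", "w") as f:
    json.dump(urls, f, indent=2)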

Final Exam
- 20-minute slots
- Demo of project
- Oral exam questions