CSCE 590 Web Scraping – XPaths


CSCE 590 Web Scraping – XPaths
Topics: The Scrapy framework revisited
Readings: Scrapy User manual – https://scrapy.org/doc/ and https://doc.scrapy.org/en/1.3/
March 16, 2017

Overview

Last time:
- Scrapy yet again
- Review of some slides from Lecture 14 before the break
- Quotes spider, author spider, login-then-scrape
- Scrapy commands: startproject, genspider, shell, crawl
- Spider templates
- Debugging example from Stack Overflow
- Yahoo! Finance – site analysis

Today:
- Scrapy one more time

Auxiliary lambda function to test XPaths

Testing XPaths in the Scrapy shell can be a lot of typing:

    response.xpath('//ul[@class="directory-url"]/li')

Instead, define a helper:

    xp = lambda x: response.xpath(x).extract_first()

Then you only have to type the path you want to test:

    xp('//ul[@class="directory-url"]/li')
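
A minimal sketch of the helper in a live Scrapy shell session (the quotes site and its title are the same ones used on the XPath slide later in the deck):

    $ scrapy shell 'http://quotes.toscrape.com'
    >>> xp = lambda x: response.xpath(x).extract_first()
    >>> xp('//title/text()')
    'Quotes to Scrape'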

https://scrapy.org

Documentation – PDF and online
FAQ – https://doc.scrapy.org/en/1.3/faq.html

Using BeautifulSoup with Scrapy

from bs4 import BeautifulSoup
import scrapy

class ExampleSpider(scrapy.Spider):
    name = "example"
    allowed_domains = ["example.com"]
    start_urls = ('http://www.example.com/',)

    def parse(self, response):
        # use lxml to get decent HTML parsing speed
        soup = BeautifulSoup(response.text, 'lxml')
        yield {
            "url": response.url,
            "title": soup.h1.string,
        }

https://doc.scrapy.org/en/1.3/faq.html

Build and run your web spiders

pip install scrapy

cat > myspider.py <<EOF
import scrapy

class BlogSpider(scrapy.Spider):
    name = 'blogspider'
    start_urls = ['https://blog.scrapinghub.com']

    def parse(self, response):
        for title in response.css('h2.entry-title'):
            yield {'title': title.css('a ::text').extract_first()}
        next_page = response.css('div.prev-post > a ::attr(href)').extract_first()
        if next_page:
            … (continued on the next slide)

    def parse(self, response):
        for title in response.css('h2.entry-title'):
            yield {'title': title.css('a ::text').extract_first()}
        next_page = response.css('div.prev-post > a ::attr(href)').extract_first()
        if next_page:
            yield scrapy.Request(response.urljoin(next_page),
                                 callback=self.parse)
EOF

scrapy runspider myspider.py

Deploy them to Scrapy Cloud

pip install shub
shub login
Insert your Scrapinghub API Key: <API_KEY>

# Deploy the spider to Scrapy Cloud
shub deploy

# Schedule the spider for execution
shub schedule blogspider
Spider blogspider scheduled, watch it running here:
https://app.scrapinghub.com/p/26731/job/1/8

# Retrieve the scraped data
shub items 26731/1/8
{"title": "Improved Frontera: Web Crawling at Scale with Python 3 Support"}
{"title": "How to Crawl the Web Politely with Scrapy"}
...

Architecture Overview

The data flow in Scrapy is controlled by the execution engine, and goes like this:

1. The Engine gets the initial Requests to crawl from the Spider.
2. The Engine schedules the Requests in the Scheduler and asks for the next Requests to crawl.
3. The Scheduler returns the next Requests to the Engine.
4. The Engine sends the Requests to the Downloader, passing through the Downloader Middlewares (see process_request()).
5. Once the page finishes downloading, the Downloader generates a Response (with that page) and sends it to the Engine, passing through the Downloader Middlewares (see process_response()).
6. The Engine receives the Response from the Downloader and sends it to the Spider for processing, passing through the Spider Middleware (see process_spider_input()).
7. The Spider processes the Response and returns scraped items and new Requests (to follow) to the Engine, passing through the Spider Middleware (see process_spider_output()).
8. The Engine sends processed items to the Item Pipelines, then sends the processed Requests to the Scheduler and asks for possible next Requests to crawl.
9. The process repeats (from step 1) until there are no more requests from the Scheduler.
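
To make those hook names concrete, here is a minimal, hedged sketch of the two middleware types (the classes and the timing behavior are invented for illustration; only the hook signatures follow the documented middleware interface):

    import time

    class TimingDownloaderMiddleware(object):
        def process_request(self, request, spider):
            # step 4: runs as each Request passes through on its way to
            # the Downloader; returning None lets processing continue
            request.meta['start'] = time.time()
            return None

        def process_response(self, request, response, spider):
            # step 5: runs as each Response passes back toward the Engine
            spider.logger.info("%s took %.2fs", request.url,
                               time.time() - request.meta['start'])
            return response

    class PassThroughSpiderMiddleware(object):
        def process_spider_input(self, response, spider):
            # step 6: runs before the Response reaches the Spider
            return None

        def process_spider_output(self, response, result, spider):
            # step 7: runs over whatever the Spider yields
            # (both items and new Requests)
            for item_or_request in result:
                yield item_or_request

Such classes would be enabled through the DOWNLOADER_MIDDLEWARES and SPIDER_MIDDLEWARES settings.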

XPath

>>> response.xpath('//title')
[<Selector xpath='//title' data='<title>Quotes to Scrape</title>'>]
>>> response.xpath('//title/text()').extract_first()
'Quotes to Scrape'

https://doc.scrapy.org/en/1.3/

XPath Tutorial – http://zvon.org/comp/r/tut-XPath_1.html#intro

List of XPaths

- XPath as filesystem addressing
- Start with //
- All elements: *
- Further conditions inside [ ]
- Attributes
- Attribute values
- Node counting
- Playing with names of selected elements
- Length of string
- Combining XPaths with |
- Child axis
- Descendant axis
- Parent axis
- Ancestor axis
- Following-sibling axis
- Preceding-sibling axis
- Following axis
- Preceding axis
- Descendant-or-self axis

http://zvon.org/comp/r/tut-XPath_1.html

XPath as filesystem addressing – full vs. relative paths

<A>
  <B/>
  <C/>
  <B/>
  <B/>
  <D>
    <B/>
  </D>
  <C/>
</A>

/A
/A/C
/A/D/B
//B
//D/B
/A/C/D/*

http://zvon.org/comp/r/tut-XPath_1.html#intro

XPath as filesystem addressing – // and *

<A>
  <X>
    <D>
      <B/>
      <C/>
      <B/>
      <B/>
    </D>
  </X>
  <D>
    <B/>
  </D>
</A>

//D
//D/B
/A/C/D/*
//D/*
/*/*/*/B – B elements with exactly three ancestors
//*

http://zvon.org/comp/r/tut-XPath_1.html#intro

XPath – further conditions inside [ ]

<A>
  <X>
    <D>
      <B/>
      <C/>
      <B/>
      <B/>
    </D>
  </X>
  <D>
    <B/>
  </D>
</A>

/A/X/D/B[1]      – the first B child of /A/X/D
/A/X/D/B[last()] – the last B child of /A/X/D

http://zvon.org/comp/r/tut-XPath_1.html#intro
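
These predicates can be checked quickly from Python with the standard-library ElementTree (a minimal sketch; the single-letter element names come from the slide's example):

    import xml.etree.ElementTree as ET

    root = ET.fromstring("<A><X><D><B/><C/><B/><B/></D></X><D><B/></D></A>")

    # ElementTree paths are evaluated relative to the node findall() is
    # called on, so /A/X/D/B[1] becomes X/D/B[1] from the root <A>
    print(root.findall("X/D/B[1]"))       # the first B child of /A/X/D
    print(root.findall("X/D/B[last()]"))  # the last B child of /A/X/D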

Attributes http://zvon.org/comp/r/tut-XPath_1.html

Any Attributes / no attributes http://zvon.org/comp/r/tut-XPath_1.html

Attribute values http://zvon.org/comp/r/tut-XPath_1.html
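
The attribute examples behind these three slides are on the linked tutorial pages; as a hedged sketch (the sample document and attribute values here are invented), the standard tests look like this with a Scrapy Selector:

    from scrapy.selector import Selector

    sel = Selector(text='<A><B id="b1"/><B id="b2"/><B name="bbb"/><B/></A>',
                   type='xml')

    print(sel.xpath('//B[@id]').extract())       # B elements that have an id attribute
    print(sel.xpath('//B[@*]').extract())        # B elements with at least one attribute
    print(sel.xpath('//B[not(@*)]').extract())   # B elements with no attributes at all
    print(sel.xpath('//B[@id="b1"]').extract())  # B elements whose id equals "b1"
    print(sel.xpath('//@id').extract())          # the id attribute values themselves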

Node counting

//*[count(BBB)=2] – elements with exactly two BBB children
//*[count(*)=2]   – elements with exactly two child elements
//*[count(*)=3]   – elements with exactly three child elements

http://zvon.org/comp/r/tut-XPath_1.html
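
A hedged sketch of count() in action (the document is invented so that each expression has a match):

    from scrapy.selector import Selector

    sel = Selector(text='<AAA><DDD><BBB/><BBB/></DDD>'
                        '<DDD><BBB/><CCC/><CCC/></DDD></AAA>', type='xml')

    print(sel.xpath('//*[count(BBB)=2]').extract())  # the first DDD (two BBB children)
    print(sel.xpath('//*[count(*)=3]').extract())    # the second DDD (three children)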

Playing with names of selected elements

//*[name()='BBB']              – elements named BBB (equivalent to //BBB)
//*[starts-with(name(),'B')]   – elements whose name starts with B
//*[contains(name(),'C')]      – elements whose name contains a C
//*[string-length(name()) = 3] – elements with a three-character name
//*[string-length(name()) < 3] – elements with a name shorter than three characters

http://zvon.org/comp/r/tut-XPath_1.html
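
Another hedged sketch, with an invented document whose names exercise each test:

    from scrapy.selector import Selector

    sel = Selector(text='<AAA><BBB/><BCC/><CCC/><Q/></AAA>', type='xml')

    print(sel.xpath("//*[starts-with(name(),'B')]").extract())    # BBB and BCC
    print(sel.xpath("//*[contains(name(),'C')]").extract())       # BCC and CCC
    print(sel.xpath('//*[string-length(name()) < 3]').extract())  # Q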

Combining XPaths with |

//CCC | //BBB
/AAA/EEE | //BBB
/AAA/EEE | //DDD/CCC | /AAA | //BBB

The | operator returns the union of the node sets, and any number of paths can be combined.

http://zvon.org/comp/r/tut-XPath_1.html
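
A short hedged sketch (invented document):

    from scrapy.selector import Selector

    sel = Selector(text='<AAA><BBB/><CCC/><DDD><CCC/></DDD><EEE/></AAA>',
                   type='xml')

    # the union contains every node matched by either path, in document order
    print(sel.xpath('//CCC | //BBB').extract())
    print(sel.xpath('/AAA/EEE | //BBB').extract())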

Child axis

The child axis is the default, so these paths are shorthand for child:: steps:

/AAA      – the same as /child::AAA
/AAA/BBB  – the same as /child::AAA/child::BBB

http://zvon.org/comp/r/tut-XPath_1.html

Parent axis

//DDD/parent::* – selects all parents of a DDD element

http://zvon.org/comp/r/tut-XPath_1.html

Ancestor axis http://zvon.org/comp/r/tut-XPath_1.html
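
The ancestor examples are on the linked page; a hedged sketch (invented nesting):

    from scrapy.selector import Selector

    sel = Selector(text='<AAA><CCC><DDD/></CCC></AAA>', type='xml')

    # ancestor:: walks upward through every enclosing element
    print(sel.xpath('//DDD/ancestor::*').extract())  # AAA and CCC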

List of XPaths

- XPath as filesystem addressing
- Start with //
- All elements: *
- Further conditions inside [ ]
- Attributes
- Attribute values
- Node counting
- Playing with names of selected elements

http://zvon.org/comp/r/tut-XPath_1.html

Siblings

/AAA/BBB/following-sibling::* – all siblings that come after /AAA/BBB
//CCC/preceding-sibling::*    – all siblings that come before a CCC element

http://zvon.org/comp/r/tut-XPath_1.html
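
A hedged sketch (invented document):

    from scrapy.selector import Selector

    sel = Selector(text='<AAA><BBB/><CCC/><DDD/></AAA>', type='xml')

    print(sel.xpath('/AAA/BBB/following-sibling::*').extract())  # CCC and DDD
    print(sel.xpath('//CCC/preceding-sibling::*').extract())     # BBB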

XPath syntax – Python 3 direct support

Python's built-in xml.etree.ElementTree module supports a limited XPath subset (see "Supported XPath syntax" in the library reference):
- Predicates (expressions within square brackets) must be preceded by a tag name, an asterisk, or another predicate.
- Position predicates must be preceded by a tag name.

https://docs.python.org/3/library/xml.etree.elementtree.html
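
A minimal sketch of what that support looks like in practice (element names invented):

    import xml.etree.ElementTree as ET

    root = ET.fromstring('<AAA><BBB id="b1"/><BBB/><CCC/></AAA>')

    # predicates must follow a tag name, *, or another predicate:
    print(root.findall('BBB[@id]'))  # BBB children that carry an id attribute
    print(root.findall('BBB[1]'))    # a position predicate preceded by a tag name
    print(root.findall('.//*'))      # all descendant elements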

xpathtester http://www.xpathtester.com/xpath

xml.py (emailed to you)

import xml.etree.ElementTree as ET

#tree = ET.parse('country_data.xml')
#root = tree.getroot()

# build a small tree by hand and print its serialization
a = ET.Element('a')
b = ET.SubElement(a, 'b')
c = ET.SubElement(a, 'c')
d = ET.SubElement(c, 'd')
ET.dump(a)  # prints <a><b /><c><d /></c></a>

https://docs.python.org/3/library/xml.etree.elementtree.html

xml.py – test the XPath tutorial examples

import xml.etree.ElementTree as ET

my_string = """
<AAA>
  <BBB>
    <CCC/>
    <DDD/>
  </BBB>
  <XXX>
    <DDD>
    …
</AAA>
"""

root = ET.fromstring(my_string)
ET.dump(root)
path = ".//CCC/.."  # every parent of a CCC element
print(root.findall(path))

Login then scrape

import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule

class LoginSpider(CrawlSpider):
    name = 'login'
    allowed_domains = ['www.example.com']
    rules = (
        Rule(LinkExtractor(allow=r'Items/'), callback='parse_item', follow=True),
    )

    def start_requests(self):
        return [scrapy.FormRequest("http://www.example.com/login",
                                   formdata={'user': 'john', 'pass': 'secret'},
                                   callback=self.logged_in)]

    def logged_in(self, response):
        # here you would extract links to follow and return Requests for
        # each of them, with another callback
        pass

    def parse_item(self, response):
        i = {}
        #i['domain_id'] = response.xpath('//input[@id="sid"]/@value').extract()
        #i['name'] = response.xpath('//div[@id="name"]').extract()
        #i['description'] = response.xpath('//div[@id="description"]').extract()
        return i

pdb — The Python Debugger

Source code: Lib/pdb.py

The module pdb defines an interactive source code debugger for Python programs. It supports setting (conditional) breakpoints and single-stepping at the source line level, inspection of stack frames, source code listing, and evaluation of arbitrary Python code in the context of any stack frame. It also supports post-mortem debugging and can be called under program control.

The debugger is extensible – it is actually defined as the class Pdb. This is currently undocumented but easily understood by reading the source. The extension interface uses the modules bdb and cmd. The debugger's prompt is (Pdb).
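
A minimal sketch of dropping into pdb from a scraping callback (the function and breakpoint placement are illustrative):

    import pdb

    def extract_title(response):
        title = response.xpath('//title/text()').extract_first()
        # pause here and enter the interactive (Pdb) prompt, where
        # `response` and `title` can be inspected before returning
        pdb.set_trace()
        return title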