I am working through the official Scrapy tutorial, which scrapes data from http://quotes.toscrape.com. The tutorial shows how to scrape the data with the following spider:
import scrapy

class QuotesSpiderCss(scrapy.Spider):
    name = "quotes_css"
    start_urls = [
        'http://quotes.toscrape.com/page/1/',
    ]

    def parse(self, response):
        quotes = response.css('div.quote')
        for quote in quotes:
            yield {
                'text': quote.css('span.text::text').extract_first(),
                'author': quote.css('small.author::text').extract_first(),
                'tags': quote.css('div.tags::text').extract()
            }
I then export the spider's output to a JSON file, and it returns the expected content:
[
{"text": "\u201cThe world as we have created it is a process of our thinking. It cannot be changed without changing our thinking.\u201d", "author": "Albert Einstein", "tags": ["\n Tags:\n ", " \n \n ", "\n \n ", "\n \n ", "\n \n ", "\n \n "]},
{"text": "\u201cIt is our choices, Harry, that show what we truly are, far more than our abilities.\u201d", "author": "J.K. Rowling", "tags": ["\n Tags:\n ", " \n \n ", "\n \n ", "\n \n "]},
{"text": "\u201cThere are only two ways to live your life. One is as though nothing is a miracle. The other is as though everything is a miracle.\u201d", "author": "Albert Einstein", "tags": ["\n Tags:\n ", " \n \n ", "\n \n ", "\n \n ", "\n \n ", "\n \n ", "\n \n "]},
...]
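As an aside, even this "expected" output contains only the "Tags:" label and whitespace in the tags field: div.tags::text selects the text nodes that belong directly to the tags div, while the actual tag names live in its <a class="tag"> children. A minimal sketch of the difference using lxml directly (Scrapy's selectors are built on top of it), with a simplified HTML stand-in for the real page:

```python
from lxml import html

# Simplified stand-in for one tags block on quotes.toscrape.com
doc = html.fromstring(
    "<html><body><div class='tags'>\n Tags:\n "
    "<a class='tag'>change</a>\n<a class='tag'>thinking</a>\n"
    "</div></body></html>"
)
tags_div = doc.xpath("//div[@class='tags']")[0]

# The div's own text nodes are just the 'Tags:' label and whitespace:
print(tags_div.xpath("./text()"))

# The tag names live in the <a class='tag'> children:
print(tags_div.xpath(".//a[@class='tag']/text()"))  # ['change', 'thinking']
```

The CSS equivalent of the second expression is the tutorial's div.tags a.tag::text.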
I am trying to write the same spider using XPath instead of CSS:
import scrapy

class QuotesSpiderXpath(scrapy.Spider):
    name = 'quotes_xpath'
    start_urls = [
        'http://quotes.toscrape.com/page/1/'
    ]

    def parse(self, response):
        quotes = response.xpath('//div[@class="quote"]')
        for quote in quotes:
            yield {
                'text': quote.xpath("//span[@class='text']/text()").extract_first(),
                'author': quote.xpath("//small[@class='author']/text()").extract_first(),
                'tags': quote.xpath("//div[@class='tags']/text()").extract()
            }
But this spider returns a list in which every item is the same quote:
[
{"text": "\u201cThe world as we have created it is a process of our thinking. It cannot be changed without changing our thinking.\u201d", "author": "Albert Einstein", "tags": ["\n Tags:\n ", " \n \n ", "\n \n ", "\n \n ", "\n \n ", "\n \n ", "\n Tags:\n ", " \n \n ", "\n \n ", "\n \n ", "\n Tags:\n ", " \n \n ", "\n \n ", "\n \n ", "\n \n ", "\n \n ", "\n \n ", "\n Tags:\n ", " \n \n ", "\n \n ", "\n \n ", "\n \n ", "\n \n ", "\n Tags:\n ", " \n \n ", "\n \n ", "\n \n ", "\n Tags:\n ", " \n \n ", "\n \n ", "\n \n ", "\n \n ", "\n Tags:\n ", " \n \n ", "\n \n ", "\n \n ", "\n Tags:\n ", " \n \n ", "\n \n ", "\n \n ", "\n \n ", "\n \n ", "\n Tags:\n ", " \n \n ", "\n \n ", "\n Tags:\n ", " \n \n ", "\n \n ", "\n \n ", "\n \n "]},
{"text": "\u201cThe world as we have created it is a process of our thinking. It cannot be changed without changing our thinking.\u201d", "author": "Albert Einstein", "tags": ["\n Tags:\n ", " \n \n ", "\n \n ", "\n \n ", "\n \n ", "\n \n ", "\n Tags:\n ", " \n \n ", "\n \n ", "\n \n ", "\n Tags:\n ", " \n \n ", "\n \n ", "\n \n ", "\n \n ", "\n \n ", "\n \n ", "\n Tags:\n ", " \n \n ", "\n \n ", "\n \n ", "\n \n ", "\n \n ", "\n Tags:\n ", " \n \n ", "\n \n ", "\n \n ", "\n Tags:\n ", " \n \n ", "\n \n ", "\n \n ", "\n \n ", "\n Tags:\n ", " \n \n ", "\n \n ", "\n \n ", "\n Tags:\n ", " \n \n ", "\n \n ", "\n \n ", "\n \n ", "\n \n ", "\n Tags:\n ", " \n \n ", "\n \n ", "\n Tags:\n ", " \n \n ", "\n \n ", "\n \n ", "\n \n "]},
{"text": "\u201cThe world as we have created it is a process of our thinking. It cannot be changed without changing our thinking.\u201d", "author": "Albert Einstein", "tags": ["\n Tags:\n ", " \n \n ", "\n \n ", "\n \n ", "\n \n ", "\n \n ", "\n Tags:\n ", " \n \n ", "\n \n ", "\n \n ", "\n Tags:\n ", " \n \n ", "\n \n ", "\n \n ", "\n \n ", "\n \n ", "\n \n ", "\n Tags:\n ", " \n \n ", "\n \n ", "\n \n ", "\n \n ", "\n \n ", "\n Tags:\n ", " \n \n ", "\n \n ", "\n \n ", "\n Tags:\n ", " \n \n ", "\n \n ", "\n \n ", "\n \n ", "\n Tags:\n ", " \n \n ", "\n \n ", "\n \n ", "\n Tags:\n ", " \n \n ", "\n \n ", "\n \n ", "\n \n ", "\n \n ", "\n Tags:\n ", " \n \n ", "\n \n ", "\n Tags:\n ", " \n \n ", "\n \n ", "\n \n ", "\n \n "]},
...]
Thanks in advance!
Best answer
The reason you always get the same quote is that you are not using relative XPath expressions. See the Scrapy documentation on working with relative XPaths.
Prefix your XPath expressions with a dot, as in the following parse method:
    def parse(self, response):
        quotes = response.xpath('//div[@class="quote"]')
        for quote in quotes:
            yield {
                'text': quote.xpath(".//span[@class='text']/text()").extract_first(),
                'author': quote.xpath(".//small[@class='author']/text()").extract_first(),
                'tags': quote.xpath(".//div[@class='tags']/text()").extract()
            }
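The difference can be demonstrated outside Scrapy. A minimal sketch using lxml directly (Scrapy's selectors wrap it), with a simplified HTML stand-in for the quotes page:

```python
from lxml import html

# Two quote blocks, simplified stand-in for the real page
doc = html.fromstring(
    "<html><body>"
    "<div class='quote'><span class='text'>A</span></div>"
    "<div class='quote'><span class='text'>B</span></div>"
    "</body></html>"
)
quotes = doc.xpath('//div[@class="quote"]')

# Absolute path: '//' restarts the search from the document root,
# so every quote node sees every text on the page -- the bug above.
absolute = [q.xpath('//span[@class="text"]/text()') for q in quotes]

# Relative path: the leading '.' anchors the search at the current node.
relative = [q.xpath('.//span[@class="text"]/text()') for q in quotes]

print(absolute)  # [['A', 'B'], ['A', 'B']]
print(relative)  # [['A'], ['B']]
```

This is exactly why the XPath spider emitted the first quote over and over while the CSS version did not: CSS selectors applied to a Scrapy selector are always scoped to that selector, whereas an XPath starting with // is always evaluated against the whole document.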
Regarding "python - CSS and XPath selectors in Scrapy", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/50306562/