I'm sorry if my question is trivial, but I've been stuck since this morning... I'm new to Scrapy and I've read the documentation, but I haven't found my answer...
I wrote this spider, and when I set parse_body as the callback in rules = (Rule(LinkExtractor(), callback='parse_body'),), it does this:
tchatch = response.xpath('//div[@class="ProductPriceBox-item detail"]/div/a/@href').extract()
print('\n TROUVE \n')
print(tchatch)
print('\n DONE \n')
But when I rename every occurrence of parse_body in my code to just parse, all it does is this:
print('\n EN FAIT, ICI : ', response.url, '\n')
It seems that my scrapy.Request requests are never called... I even printed lots of useless things to find out whether my code was reaching these functions, but it prints nothing except the print shown above.
Any ideas, please?
# -*- coding: utf-8 -*-
import scrapy
import re
import numbers

from fnac.items import FnacItem
from urllib.request import urlopen
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor
from bs4 import BeautifulSoup


class Fnac(CrawlSpider):
    name = 'FnacCom'
    allowed_domains = ['fnac.com']
    start_urls = ['http://musique.fnac.com/a10484807/The-Cranberries-Something-else-CD-album']

    rules = (
        Rule(LinkExtractor(), callback='parse_body'),
    )

    def parse_body(self, response):
        item = FnacItem()

        nb_sales = response.xpath('//body//table[@summary="données détaillée du vendeur"]/tbody/tr/td/span/text()').re(r'([\d]*) ventes')
        country = response.xpath('//body//table[@summary="données détaillée du vendeur"]/tbody/tr/td/text()').re(r'([A-Z].*)')

        item['nb_sales'] = ''.join(nb_sales).strip()
        item['country'] = ''.join(country).strip()

        print(response.url)
        test_list = response.xpath('//a/@href')
        for test_list in response.xpath('.//div[@class="ProductPriceBox-item detail"]'):
            tchatch = response.xpath('//div[@class="ProductPriceBox-item detail"]/div/a/@href').extract()
            print('\n TROUVE \n')
            print(tchatch)
            print('\n DONE \n')

        yield scrapy.Request(response.url, callback=self.parse_iframe, meta={'item': item})

    def parse_iframe(self, response):
        f_item1 = response.meta['item']

        print('\n EN FAIT, ICI : ', response.url, '\n')
        soup = BeautifulSoup(urlopen(response.url), "lxml")
        iframexx = soup.find_all('iframe')
        if len(iframexx) != 0:
            for iframe in iframexx:
                yield scrapy.Request(iframe.attrs['src'], callback=self.extract_or_loop, meta={'item': f_item1})
        else:
            yield scrapy.Request(response.url, callback=self.extract_or_loop, meta={'item': f_item1})

    def extract_or_loop(self, response):
        f_item2 = response.meta['item']

        print('\n PEUT ETRE ICI ? \n')
        address = response.xpath('//body//div/p/text()').re(r'.*Adresse \: (.*)\n?.*')
        email = response.xpath('//body//div/ul/li[contains(text(),"@")]/text()').extract()
        name = response.xpath('//body//div/p[@class="customer-policy-label"]/text()').re(r'Infos sur la boutique \: ([a-zA-Z0-9]*\s*)')
        phone = response.xpath('//body//div/p/text()').re(r'.*Tél \: ([\d]*)\n?.*')
        siret = response.xpath('//body//div/p/text()').re(r'.*Siret \: ([\d]*)\n?.*')
        vat = response.xpath('//body//div/text()').re(r'.*TVA \: (.*)')

        if len(name) != 0:
            print('\n', name, '\n')
            f_item2['name'] = ''.join(name).strip()
            f_item2['address'] = ''.join(address).strip()
            f_item2['phone'] = ''.join(phone).strip()
            f_item2['email'] = ''.join(email).strip()
            f_item2['vat'] = ''.join(vat).strip()
            f_item2['siret'] = ''.join(siret).strip()
            yield f_item2
        else:
            for sel in response.xpath('//html/body'):
                list_urls = sel.xpath('//a/@href').extract()
                list_iframe = response.xpath('//div[@class="ProductPriceBox-item detail"]/div/a/@href').extract()
                if len(list_iframe) != 0:
                    for list_iframe in list_urls:
                        print('\n', list_iframe, '\n')
                        print('\n GROS TCHATCH \n')
                        yield scrapy.Request(list_iframe, callback=self.parse_body)
                for url in list_urls:
                    yield scrapy.Request(response.urljoin(url), callback=self.parse_body)
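One detail in the last line of the spider worth noting: hrefs extracted from a page are often relative, so they must be joined against the page URL before being passed to scrapy.Request (which is why the yield scrapy.Request(list_iframe, ...) above would fail on relative links). Scrapy's response.urljoin behaves like the standard library's urljoin; the paths below are made-up examples for illustration:

```python
from urllib.parse import urljoin

base = 'http://musique.fnac.com/a10484807/The-Cranberries-Something-else-CD-album'

# A relative href is resolved against the page URL...
print(urljoin(base, '/autre-page'))            # http://musique.fnac.com/autre-page
# ...while an already-absolute href is left unchanged.
print(urljoin(base, 'http://www.fnac.com/x'))  # http://www.fnac.com/x
```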
Best answer
In the Scrapy documentation for CrawlSpider, there is a warning:
Warning

When writing crawl spider rules, avoid using parse as callback, since the CrawlSpider uses the parse method itself to implement its logic. So if you override the parse method, the crawl spider will no longer work.
You can have a look; here is the link
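To see why an overridden parse kills the rule dispatch, here is a minimal, scrapy-free sketch (a hypothetical analogue for illustration, not Scrapy's actual implementation): the base class runs its rule-following machinery inside a method named parse, so a subclass that defines its own parse replaces that machinery entirely, while a callback under any other name leaves it intact.

```python
class BaseSpider:
    """Toy analogue of CrawlSpider: its crawl logic lives in parse()."""

    def parse(self, response):
        # The base class uses parse() to apply rules and dispatch callbacks.
        return "following rules -> " + self.rule_callback(response)

    def rule_callback(self, response):
        return "no callback"


class GoodSpider(BaseSpider):
    # Callback under a different name: the base parse() still runs
    # and dispatches to it.
    def parse_body(self, response):
        return "parse_body saw " + response

    rule_callback = parse_body


class BrokenSpider(BaseSpider):
    # Overriding parse() replaces the base logic: rules never run.
    def parse(self, response):
        return "parse saw " + response


print(GoodSpider().parse("page"))    # following rules -> parse_body saw page
print(BrokenSpider().parse("page"))  # parse saw page
```

Renaming the callback back to parse_body (or any name other than parse), exactly as the rules tuple in the question already does, keeps CrawlSpider's own parse in charge of following links.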
Regarding python - scrapy.Request does not call back my function, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/45075386/