With the code written the way I have it here, I get results from other sites, but for some reason this site throws an error. Since I'm new to Scrapy, I'm not able to solve the problem on my own. The XPath is fine. I'm attaching what I see in the terminal along with the code:
items.py
import scrapy
class OlxItem(scrapy.Item):
    Title = scrapy.Field()
    Url = scrapy.Field()
olxsp.py
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor
class OlxspSpider(CrawlSpider):
    name = "olxsp"
    allowed_domains = ['olx.com.pk']
    start_urls = ['https://www.olx.com.pk/']

    rules = [Rule(LinkExtractor(restrict_xpaths='//div[@class="lheight16 rel homeIconHeight"]')),
             Rule(LinkExtractor(restrict_xpaths='//li[@class="fleft tcenter"]'),
                  callback='parse_items', follow=True)]

    def parse_items(self, response):
        page = response.xpath('//h3[@class="large lheight20 margintop10"]')
        for post in page:
            AA = post.xpath('.//a[@class="marginright5 link linkWithHash detailsLink"]/span/text()').extract()
            CC = post.xpath('.//a[@class="marginright5 link linkWithHash detailsLink"]/@href').extract()
            yield {'Title': AA, 'Url': CC}
settings.py
BOT_NAME = 'olx'
SPIDER_MODULES = ['olx.spiders']
NEWSPIDER_MODULE = 'olx.spiders'
ROBOTSTXT_OBEY = True
Accepted answer
You have ROBOTSTXT_OBEY = True, which tells Scrapy to check the robots.txt file of the domains it crawls so it can determine how to be polite to those sites. In allowed_domains = ['www.olx.com'] you allow a domain different from the one you actually crawl. If you only want to crawl the olx.com.pk site, change allowed_domains to ['olx.com.pk']. If you don't actually know which sites you will crawl, simply remove the allowed_domains attribute.
About python - Scrapy throws AttributeError, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/43419370/