I am trying to scrape a few attributes from all the (#123) detail pages listed on this category page - http://stinkybklyn.com/shop/cheese/ - but Scrapy does not follow the link pattern I set. I have also checked the Scrapy documentation and a few tutorials, but no luck!
Here is the code:
import scrapy
from scrapy.contrib.linkextractors import LinkExtractor
from scrapy.contrib.spiders import CrawlSpider, Rule

class Stinkybklyn(CrawlSpider):
    name = "Stinkybklyn"
    allowed_domains = ["stinkybklyn.com"]
    start_urls = [
        "http://stinkybklyn.com/shop/cheese/chandoka",
    ]

    Rule(LinkExtractor(allow=r'\/shop\/cheese\/.*'),
         callback='parse_items', follow=True)

    def parse_items(self, response):
        print "response", response
        hxs = HtmlXPathSelector(response)
        title = hxs.select("//*[@id='content']/div/h4").extract()
        title = "".join(title)
        title = title.strip().replace("\n", "").lstrip()
        print "title is:", title
Can someone please tell me what I am doing wrong here?
Best Answer
The key problem with your code is that you have not set rules for the CrawlSpider.
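A bare Rule(...) expression in the class body is evaluated once and then simply discarded; CrawlSpider only picks up the class attribute named rules. A minimal sketch of the difference (the class and spider names here are placeholders, not the asker's code):

from scrapy.contrib.linkextractors import LinkExtractor
from scrapy.contrib.spiders import CrawlSpider, Rule

# Wrong: this Rule(...) is evaluated at class-definition time and thrown away
class BrokenSpider(CrawlSpider):
    name = "broken"
    Rule(LinkExtractor(allow=r'/shop/cheese/.*'), callback='parse_items', follow=True)

# Right: CrawlSpider reads the class attribute named "rules"
class FixedSpider(CrawlSpider):
    name = "fixed"
    rules = [
        Rule(LinkExtractor(allow=r'/shop/cheese/.*'), callback='parse_items', follow=True),
    ]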
Other improvements I would suggest:
- there is no need to instantiate HtmlXPathSelector; you can use response directly
- select() is deprecated now; use xpath() instead
- get the text() of the title element so that you retrieve, for example, Chandoka instead of <h4>Chandoka</h4> (see the short illustration after this list)
- I think you should start from the cheese shop catalog page: http://stinkybklyn.com/shop/cheese
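To illustrate the xpath()/text() point, you can try the selectors in a scrapy shell session against one of the detail pages; the output below assumes the markup described in the question (an <h4> containing "Chandoka"):

# scrapy shell http://stinkybklyn.com/shop/cheese/chandoka
>>> response.xpath("//*[@id='content']/div/h4").extract()
[u'<h4>Chandoka</h4>']
>>> response.xpath("//*[@id='content']/div/h4/text()").extract()
[u'Chandoka']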
Complete code with the improvements applied:
from scrapy.contrib.linkextractors import LinkExtractor
from scrapy.contrib.spiders import CrawlSpider, Rule

class Stinkybklyn(CrawlSpider):
    name = "Stinkybklyn"
    allowed_domains = ["stinkybklyn.com"]
    start_urls = [
        "http://stinkybklyn.com/shop/cheese",
    ]

    rules = [
        Rule(LinkExtractor(allow=r'\/shop\/cheese\/.*'), callback='parse_items', follow=True)
    ]

    def parse_items(self, response):
        title = response.xpath("//*[@id='content']/div/h4/text()").extract()
        title = "".join(title)
        title = title.strip().replace("\n", "").lstrip()
        print "title is:", title
Regarding python - Scrapy CrawlSpider not following links, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/30722486/