I am trying to use the Rule class to move to the next page in my crawler. Here is my code:
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from crawler.items import GDReview


class GdSpider(CrawlSpider):
    name = "gd"
    allowed_domains = ["glassdoor.com"]
    start_urls = [
        "http://www.glassdoor.com/Reviews/Johnson-and-Johnson-Reviews-E364_P1.htm"
    ]

    rules = (
        # Extract next-page links and follow them
        Rule(SgmlLinkExtractor(restrict_xpaths=('//li[@class="next"]/a/@href',)), follow=True),
    )

    def parse(self, response):
        company_name = response.xpath('//*[@id="EIHdrModule"]/div[3]/div[2]/p/text()').extract()

        # loop over every review on this page
        for sel in response.xpath('//*[@id="EmployerReviews"]/ol/li'):
            review = GDReview()
            review['company_name'] = company_name
            review['id'] = str(sel.xpath('@id').extract()[0]).split('_')[1]
            review['body'] = sel.xpath('div/div[3]/div/div[2]/p/text()').extract()
            review['date'] = sel.xpath('div/div[1]/div/time/text()').extract()
            review['summary'] = sel.xpath('div/div[2]/div/div[2]/h2/tt/a/span/text()').extract()
            yield review
My question is about the rules part. The links extracted by this rule do not contain the domain name; for example, it returns something like "/Reviews/Johnson-and-Johnson-Reviews-E364_P1.htm".
How can I make sure that my crawler appends the domain to the returned links?
Thanks
Best Answer
You can be sure it will, since that is the default behavior of link extractors in Scrapy (source code).
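To see this concretely, here is a minimal, self-contained sketch. It assumes a reasonably recent Scrapy, where the extractor lives under scrapy.linkextractors rather than the older scrapy.contrib path, and the HTML snippet is a made-up stand-in for the Glassdoor page:

# Minimal sketch, assuming a modern Scrapy install; the HTML and URLs
# are invented stand-ins for the page in the question.
from scrapy.http import HtmlResponse
from scrapy.linkextractors import LinkExtractor

html = b'''<html><body>
<li class="next"><a href="/Reviews/Johnson-and-Johnson-Reviews-E364_P2.htm">Next</a></li>
</body></html>'''

response = HtmlResponse(
    url="http://www.glassdoor.com/Reviews/Johnson-and-Johnson-Reviews-E364_P1.htm",
    body=html,
    encoding="utf-8",
)

extractor = LinkExtractor(restrict_xpaths='//li[@class="next"]/a')
for link in extractor.extract_links(response):
    print(link.url)
# -> http://www.glassdoor.com/Reviews/Johnson-and-Johnson-Reviews-E364_P2.htm

Even though the href in the page is relative, the extracted link comes back as an absolute URL built from the response URL.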
Also, the restrict_xpaths argument should not point to the @href attribute; it should point to a elements, or to elements that have a elements as descendants. In addition, restrict_xpaths can be defined as a plain string.
In other words, replace:
restrict_xpaths=('//li[@class="next"]/a/@href',)
with:
restrict_xpaths='//li[@class="next"]/a'
Also, you need to switch from SgmlLinkExtractor to LxmlLinkExtractor, as the Scrapy documentation warns:
SGMLParser based link extractors are unmaintained and their usage is discouraged. It is recommended to migrate to LxmlLinkExtractor if you are still using SgmlLinkExtractor.
Personally, I usually use the LinkExtractor shortcut to LxmlLinkExtractor:
from scrapy.contrib.linkextractors import LinkExtractor
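That shortcut is literally an alias. Under a modern Scrapy layout (an assumption on my part; the contrib modules were later promoted to top-level packages), you can verify this directly:

# Quick check, assuming a modern Scrapy where the contrib-era modules
# live at the top level (scrapy.linkextractors, not scrapy.contrib.linkextractors).
from scrapy.linkextractors import LinkExtractor
from scrapy.linkextractors.lxmlhtml import LxmlLinkExtractor

print(LinkExtractor is LxmlLinkExtractor)  # True: LinkExtractor is just an alias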
To summarize, this is what I would have in rules:
rules = [
    Rule(LinkExtractor(restrict_xpaths='//li[@class="next"]/a'), follow=True)
]
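One related caveat: CrawlSpider uses parse internally to drive its rules, and the Scrapy docs warn against overriding it, which the question's spider does. Below is a hedged sketch of how the corrected rule and the question's extraction logic could fit together, with the callback renamed to parse_item (GDReview and the XPaths are carried over from the question and untested against the live site):

# Sketch under the question's assumptions (crawler.items.GDReview and the
# XPaths come from the question). CrawlSpider drives its rules through
# parse(), so the extraction logic moves into parse_item instead.
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors import LinkExtractor
from crawler.items import GDReview


class GdSpider(CrawlSpider):
    name = "gd"
    allowed_domains = ["glassdoor.com"]
    start_urls = [
        "http://www.glassdoor.com/Reviews/Johnson-and-Johnson-Reviews-E364_P1.htm"
    ]

    rules = [
        # Follow next-page links; every page fetched via this rule
        # is passed to parse_item as the callback
        Rule(LinkExtractor(restrict_xpaths='//li[@class="next"]/a'),
             callback='parse_item', follow=True),
    ]

    def parse_item(self, response):
        company_name = response.xpath('//*[@id="EIHdrModule"]/div[3]/div[2]/p/text()').extract()
        for sel in response.xpath('//*[@id="EmployerReviews"]/ol/li'):
            review = GDReview()
            review['company_name'] = company_name
            review['id'] = str(sel.xpath('@id').extract()[0]).split('_')[1]
            review['body'] = sel.xpath('div/div[3]/div/div[2]/p/text()').extract()
            review['date'] = sel.xpath('div/div[1]/div/time/text()').extract()
            review['summary'] = sel.xpath('div/div[2]/div/div[2]/h2/tt/a/span/text()').extract()
            yield review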
This answer to "python - How to use the Rule class in scrapy" was found on Stack Overflow: https://stackoverflow.com/questions/29177480/