I'm using Scrapy to crawl a site and fetch all of its pages, but my current rules still let through URLs I don't want, such as comment links like "http://www.example.com/some-article/comment-page-1" in addition to a post's main URL. What can I add to my rules to exclude these unwanted items? Here is my current code:
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.item import Item

class MySpider(CrawlSpider):
    name = 'crawltest'
    allowed_domains = ['example.com']
    start_urls = ['http://www.example.com']
    rules = [
        Rule(SgmlLinkExtractor(allow=[r'/\d+']), follow=True),
        Rule(SgmlLinkExtractor(allow=[r'\d+']), callback='parse_item'),
    ]

    def parse_item(self, response):
        # do something
        pass
Best Answer
SgmlLinkExtractor has an optional parameter called deny; a rule only matches when the allow regular expression matches and the deny regular expression does not.
Example from the docs:
rules = (
# Extract links matching 'category.php' (but not matching 'subsection.php')
# and follow links from them (since no callback means follow=True by default).
Rule(SgmlLinkExtractor(allow=('category\.php', ), deny=('subsection\.php', ))),
# Extract links matching 'item.php' and parse them with the spider's method parse_item
Rule(SgmlLinkExtractor(allow=('item\.php', )), callback='parse_item'),
)
Perhaps you could just check that the URL does not contain the word comment?
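
Applied to the spider from the question, a minimal sketch might look like this; the deny pattern r'comment-page-\d+' is an assumption based on the example URL in the question, and you could broaden it to r'comment' if any URL containing that word should be skipped:

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor

class MySpider(CrawlSpider):
    name = 'crawltest'
    allowed_domains = ['example.com']
    start_urls = ['http://www.example.com']
    rules = [
        # Follow numeric article URLs, but skip comment pagination links such as
        # /some-article/comment-page-1 (pattern assumed from the question's example URL)
        Rule(SgmlLinkExtractor(allow=[r'/\d+'], deny=[r'comment-page-\d+']),
             follow=True),
        Rule(SgmlLinkExtractor(allow=[r'\d+'], deny=[r'comment-page-\d+']),
             callback='parse_item'),
    ]

    def parse_item(self, response):
        # extract the post's fields here
        pass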
Regarding "python - Scrapy - exclude unwanted URLs (like comments)", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/16761435/