python - selenium: socket.error: [Errno 61] Connection refused

Tags: python selenium selenium-webdriver web-scraping scrapy

I want to crawl 10 links.
When I run the spider, the links do end up in the JSON file, but I still get the following error.
It looks as if Selenium runs twice. What is the problem?
Please guide me, thanks.

2014-08-06 10:30:26+0800 [spider2] DEBUG: Scraped from <200 http://www.test/a/1>
{'link': u'http://www.test/a/1'}
2014-08-06 10:30:26+0800 [spider2] ERROR: Spider error processing <GET
http://www.test/a/1>
Traceback (most recent call last):
 ........
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/socket.py", line 571, in create_connection
    raise err
socket.error: [Errno 61] Connection refused

Here is my code:

from selenium import webdriver
from scrapy.spider import Spider
from ta.items import TaItem
from selenium.webdriver.support.wait import WebDriverWait
from scrapy.http.request import Request

class ProductSpider(Spider):
    name = "spider2"  
    start_urls = ['http://www.test.com/']
    def __init__(self):
        self.driver = webdriver.Firefox()

    def parse(self, response):
        self.driver.get(response.url)
        self.driver.implicitly_wait(20)  
        next = self.driver.find_elements_by_css_selector("div.body .heading a")
        for a in next:
            item = TaItem()    
            item['link'] =  a.get_attribute("href")     
            yield Request(url=item['link'], meta={'item': item}, callback=self.parse_detail)  

    def parse_detail(self,response):
        item = response.meta['item']
        yield item
        self.driver.close()

Best Answer

The problem is that you are closing the driver too early.

`parse_detail` runs once per scraped link, and its first invocation closes the browser, so every later request finds no WebDriver to connect to (hence the "Connection refused"). You should close the driver only after the spider has finished all of its work, by listening for the spider_closed signal:

from scrapy import signals
from scrapy.xlib.pydispatch import dispatcher
from selenium import webdriver
from scrapy.spider import Spider
from ta.items import TaItem
from scrapy.http.request import Request


class ProductSpider(Spider):
    name = "spider2"  
    start_urls = ['http://www.test.com/']
    def __init__(self):
        self.driver = webdriver.Firefox()
        dispatcher.connect(self.spider_closed, signals.spider_closed)

    def parse(self, response):
        self.driver.get(response.url)
        self.driver.implicitly_wait(20)  
        next = self.driver.find_elements_by_css_selector("div.body .heading a")
        for a in next:
            item = TaItem()    
            item['link'] =  a.get_attribute("href")     
            yield Request(url=item['link'], meta={'item': item}, callback=self.parse_detail)  

    def parse_detail(self,response):
        item = response.meta['item']
        yield item

    def spider_closed(self, spider):
        self.driver.close()
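The fix hinges entirely on *when* the teardown callback fires: only after every queued request has been processed. The dispatch pattern can be illustrated with a minimal, stdlib-only sketch; note that `Engine`, `FakeDriver`, and `SpiderLike` are illustrative stand-ins invented for this example, not Scrapy or Selenium APIs.

```python
class FakeDriver:
    """Stand-in for the Selenium WebDriver: just tracks open/closed state."""
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True


class Engine:
    """Toy stand-in for Scrapy's signal dispatcher and crawl loop."""
    def __init__(self):
        self._listeners = []

    def connect(self, callback):
        # Equivalent to dispatcher.connect(..., signals.spider_closed)
        self._listeners.append(callback)

    def crawl(self, spider, urls):
        for url in urls:
            spider.parse(url)          # the driver must still be open here
        for cb in self._listeners:     # fired only after ALL requests finish
            cb(spider)


class SpiderLike:
    def __init__(self, engine):
        self.driver = FakeDriver()
        self.items = []
        engine.connect(self.spider_closed)

    def parse(self, url):
        # Closing the driver inside this method (as the question's code
        # effectively does) would make every request after the first fail.
        assert not self.driver.closed
        self.items.append({'link': url})

    def spider_closed(self, spider):
        self.driver.close()


engine = Engine()
spider = SpiderLike(engine)
engine.crawl(spider, ['http://www.test/a/%d' % i for i in range(10)])
print(len(spider.items), spider.driver.closed)  # 10 True
```

All 10 links are scraped before the signal fires, and only then does the driver close, which is exactly the ordering the `spider_closed` hookup above guarantees.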

See also: scrapy: Call a function when a spider quits.

Regarding "python - selenium: socket.error: [Errno 61] Connection refused", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/25151669/
