python - Scrapy: UNFORMATTABLE OBJECT WRITTEN TO LOG

Tags: python scrapy

I've been stuck on this log for 3 days:

2014-06-03 11:32:54-0700 [scrapy] INFO: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2014-06-03 11:32:54-0700 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2014-06-03 11:32:54-0700 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2014-06-03 11:32:54-0700 [scrapy] INFO: Enabled item pipelines: ImagesPipeline, FilterFieldsPipeline
2014-06-03 11:32:54-0700 [NefsakLaptopSpider] INFO: Spider opened
2014-06-03 11:32:54-0700 [NefsakLaptopSpider] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2014-06-03 11:32:54-0700 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:6023
2014-06-03 11:32:54-0700 [scrapy] DEBUG: Web service listening on 0.0.0.0:6080
2014-06-03 11:32:56-0700 [NefsakLaptopSpider] UNFORMATTABLE OBJECT WRITTEN TO LOG with fmt 'DEBUG: Crawled (%(status)s) %(request)s (referer: %(referer)s)%(flags)s', MESSAGE LOST
2014-06-03 11:33:54-0700 [NefsakLaptopSpider] INFO: Crawled 1 pages (at 1 pages/min), scraped 0 items (at 0 items/min)
2014-06-03 11:34:54-0700 [NefsakLaptopSpider] INFO: Crawled 1 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
More like the last line... Forever and very slowly

The offending line, fourth from the end, only appears when I set the logging level in Scrapy to DEBUG.
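For context, the log level comes from Scrapy's standard LOG_LEVEL setting (it can also be passed on the command line with --loglevel/-L):

```python
# settings.py -- project-wide Scrapy log level
LOG_LEVEL = "DEBUG"  # the UNFORMATTABLE OBJECT line only shows at DEBUG
```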

Here is the header of my spider:

from scrapy.http import Request, FormRequest
from scrapy.contrib.spiders import CrawlSpider  # scrapy.spiders in newer Scrapy

class ScrapyCrawler(CrawlSpider):
  name = "ScrapyCrawler"
  def __init__(self, spiderPath, spiderID, name="ScrapyCrawler", *args, **kwargs):
    super(ScrapyCrawler, self).__init__()
    self.name = name
    self.path = spiderPath
    self.id = spiderID
    self.path_index = 0
    self.favicon_required = kwargs.get("downloadFavicon", True) #the favicon for the scraped site will be added to the first item
    self.favicon_item = None

  def start_requests(self):
    start_path = self.path.pop(0)
    # determine the callback based on next step
    callback = self.parse_intermediate if type(self.path[0]) == URL \
          else self.parse_item_pages
    if type(start_path) == URL:
      start_url = start_path
      request = Request(start_path, callback=callback)
    elif type(start_path) == Form:
      start_url = start_path.url
      request = FormRequest(start_path.url, start_path.data,
                            callback=callback)
    else:
      # guard against `request` being unbound for an unexpected path type
      raise TypeError("start_path must be a URL or a Form")

    return [request]

  def parse_intermediate(self, response):
     ...

  def parse_item_pages(self, response):
     ...

The problem is that no callback is ever called after start_requests().

Here's a hint: the first request issued by start_requests() is for a page like http://www.example.com. If I change http to https, this causes a redirect in Scrapy, and the log changes to:

2014-06-03 12:00:51-0700 [NefsakLaptopSpider] UNFORMATTABLE OBJECT WRITTEN TO LOG with fmt 'DEBUG: Redirecting (%(reason)s) to %(redirected)s from %(request)s', MESSAGE LOST
2014-06-03 12:00:51-0700 [NefsakLaptopSpider] DEBUG: Redirecting (302) to <GET http://www.nefsak.com/home.php?cat=58> from <GET http://www.nefsak.com/home.php?cat=58&xid_be279=248933808671e852497b0b1b33333a8b>
2014-06-03 12:00:52-0700 [NefsakLaptopSpider] DEBUG: Redirecting (301) to <GET http://www.nefsak.com/15-17-Screen/> from <GET http://www.nefsak.com/home.php?cat=58>
2014-06-03 12:00:54-0700 [NefsakLaptopSpider] DEBUG: Crawled (200) <GET http://www.nefsak.com/15-17-Screen/> (referer: None)
2014-06-03 12:00:54-0700 [NefsakLaptopSpider] ERROR: Spider must return Request, BaseItem or None, got 'list' in <GET http://www.nefsak.com/15-17-Screen/>
2014-06-03 12:00:56-0700 [NefsakLaptopSpider] DEBUG: Crawled (200) <GET http://www.nefsak.com/15-17-Screen/?page=4> (referer: http://www.nefsak.com/15-17-Screen/)
More extracted links and more errors like above, then it finishes, unlike former log

As you can see from the last line, the spider actually went off and extracted a navigation page, all by itself! (There is navigation-extraction code, but it never gets called, since the debugger breakpoint there is never reached.)
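As an aside, the "got 'list'" errors in the log above mean a callback handed Scrapy a nested list: Scrapy iterates the callback's return value and expects each element to be a Request or an item, not a list. A minimal plain-Python sketch of the difference (hypothetical names, no Scrapy dependency):

```python
def bad_callback(links):
    # Returns [[...]]: the framework sees a single element of type 'list'
    # and raises "Spider must return Request, BaseItem or None, got 'list'".
    return [[link for link in links]]

def good_callback(links):
    # Yield each request/item individually so the caller gets a flat stream.
    for link in links:
        yield link

assert bad_callback(["page1", "page2"]) == [["page1", "page2"]]
assert list(good_callback(["page1", "page2"])) == ["page1", "page2"]
```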

Unfortunately, I couldn't reproduce the error outside the project. A similar spider just works — but not inside the project.

I'll post more code if needed.

Thanks, and sorry for the long post.

Best Answer

Well, I had a URL class derived from the built-in str. It was coded like this:

import urlparse  # Python 2; use urllib.parse in Python 3

class URL(str):

  def canonicalize(self, parentURL):
    parsed_self = urlparse.urlparse(self)
    if parsed_self.scheme:
      return self[:] # string copy?
    else:
      parsed_parent = urlparse.urlparse(parentURL)
      return urlparse.urljoin(parsed_parent.scheme + "://" + parsed_parent.netloc, self)

  def __str__(self):
    return "<URL : {0} >".format(self)  # BUG: format() re-enters __str__

The __str__ method caused infinite recursion whenever the object was printed or logged, because format() calls __str__ again... but the exception was somehow swallowed by Twisted; the error only surfaced when the response itself was printed. The fix:

def __str__(self):
  return "<URL : " + self + " >" # or use super(URL, self).__str__()
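The recursion is easy to demonstrate outside Scrapy/Twisted. A minimal sketch (on Python 3 the swallowed exception surfaces as a RecursionError; Python 2 raised RuntimeError instead):

```python
class BadURL(str):
    def __str__(self):
        # For a str *subclass*, format() with an empty spec falls back to
        # str(self), which calls __str__ again -> infinite recursion.
        return "<URL : {0} >".format(self)

class GoodURL(str):
    def __str__(self):
        # Plain concatenation uses str.__add__ and never re-enters __str__.
        return "<URL : " + self + " >"

print(str(GoodURL("http://www.example.com")))  # <URL : http://www.example.com >

try:
    str(BadURL("http://www.example.com"))
except RecursionError:
    print("BadURL.__str__ recursed via format()")
```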

:-)

On python - Scrapy: UNFORMATTABLE OBJECT WRITTEN TO LOG, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/24023085/

Related articles:

javascript - Replacing text with a regular expression

javascript - Scrapy: POST request returns a JSON response (200 OK) but the data is incomplete

python - How to use map with urljoin?

Crawling an entire website with a Scrapy-based Python function

scrapy - Avoiding crawling duplicate URLs

python - Reusing existing objects for immutable objects?

python - Cannot import name 'PIL' from '_imaging'

python - psycopg2 prepared delete statement

python - Fully enumerating ranges from a list of breakpoints

python - Handling pages that return HTTP 500 codes with Python/Scrapy