python - My scrapy can't get a valid response

Tags: python xpath web-crawler scrapy

When I use scrapy to fetch some stock information from 'http://quote.eastmoney.com/stocklist.html', I can't get a valid response. In fact, when I run it, I get nothing at all. Here is the content of stocks.py:

import scrapy
from scrapy.selector import Selector
import re

class StocksSpider(scrapy.Spider):
    name = "stocks"

    start_urls = ['http://quote.eastmoney.com/stocklist.html']

    def parse(self, response):

        for i in Selector(response).xpath('//div[@id="quotesearch"]/ul/li/a/@href').extract():
            try:
                stock=re.split(r'[./]',i)[5]

                url='https://gupiao.baidu.com/stock/'+stock+'.html'
                yield scrapy.Rquest(url,callback=self.parse_stock)
            except:
                continue

    def parse_stock(self,response):
        infoDict={}

        name=Selector(response).xpath('//a[@class="bets-name"]/text()').extract()[0]
        keylist=Selector(response).xpath('//dl/dt/text()').extract()

        for i in range(len(keylist)):
            try:
                val=Selector(response).xpath('//dl/dd/text()').extract()[0]
            except:
                val='--'
            infoDict[keylist[i]]=val
        infoDict.update({'股票名称':name[0].split()[0]+'('+Selector(response).xpath('//a[@class="bets-name"]/span/text()')[0].extract()[0]+')'})

        yield infoDict

This is what I get when I run it:

2017-06-05 20:28:32 [scrapy.utils.log] INFO: Scrapy 1.3.3 started (bot: BaiduStocks)
2017-06-05 20:28:32 [scrapy.utils.log] INFO: Overridden settings: {'BOT_NAME': 'BaiduStocks', 'FEED_EXPORT_ENCODING': 'utf-8', 'NEWSPIDER_MODULE': 'BaiduStocks.spiders', 'ROBOTSTXT_OBEY': True, 'SPIDER_MODULES': ['BaiduStocks.spiders']}
2017-06-05 20:28:32 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.logstats.LogStats']
2017-06-05 20:28:33 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2017-06-05 20:28:33 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2017-06-05 20:28:33 [scrapy.middleware] INFO: Enabled item pipelines:
['BaiduStocks.pipelines.BaidustocksInfoPipeline']
2017-06-05 20:28:33 [scrapy.core.engine] INFO: Spider opened
2017-06-05 20:28:33 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-06-05 20:28:33 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2017-06-05 20:28:33 [scrapy.core.engine] DEBUG: Crawled (404) <GET http://quote.eastmoney.com/robots.txt> (referer: None)
2017-06-05 20:28:33 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://quote.eastmoney.com/stocklist.html> (referer: None)
2017-06-05 20:28:33 [scrapy.core.engine] INFO: Closing spider (finished)
2017-06-05 20:28:33 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 458,
 'downloader/request_count': 2,
 'downloader/request_method_count/GET': 2,
 'downloader/response_bytes': 570201,
 'downloader/response_count': 2,
 'downloader/response_status_count/200': 1,
 'downloader/response_status_count/404': 1,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2017, 6, 5, 12, 28, 33, 930937),
 'log_count/DEBUG': 3,
 'log_count/INFO': 7,
 'response_received_count': 2,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'start_time': datetime.datetime(2017, 6, 5, 12, 28, 33, 28477)}
2017-06-05 20:28:33 [scrapy.core.engine] INFO: Spider closed (finished)

I've been digging into this for several days but I can't figure out what's wrong, so I really need your help. Thank you!

Best Answer

Here's a short code review:

import scrapy
from scrapy.selector import Selector
import re


class StocksSpider(scrapy.Spider):
    name = "stocks"

    start_urls = ['http://quote.eastmoney.com/stocklist.html']

    def parse(self, response):
        # response has a shortcut for selector
        for i in response.xpath('//div[@id="quotesearch"]/ul/li/a/@href').extract():
            # never silently catch and drop errors
            stock = re.split(r'[./]', i)[5]
            url = 'https://gupiao.baidu.com/stock/' + stock + '.html'
            yield scrapy.Request(url, callback=self.parse_stock)

    def parse_stock(self, response):
        # objects should be lowercase in python
        item = dict()
        # there's extract_first shortcut for extract()[0]
        name = response.xpath('//a[@class="bets-name"]/text()').extract_first('')
        keylist = response.xpath('//dl/dt/text()').extract()

        # for each is preferred loop style.
        for key in keylist:
            # extract_first allows a default argument to be set
            item[key] = response.xpath('//dl/dd/text()').extract_first('--')
        data = response.xpath('//a[@class="bets-name"]/span/text()').extract_first('')
        item['data'] = '{}({})'.format(name.split()[0], data)
        yield item
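One detail this review keeps from the original code: the loop above always reads the first <dd> on the page, so every key ends up with the same value. If the intent is to pair each <dt> label with its matching <dd> value, a zip-based variant is one way to sketch it (this assumes the dt and dd elements appear in matched order on the page, which I have not verified):

# hypothetical replacement for the loop inside parse_stock:
# pair each <dt> label with the <dd> at the same position
keys = response.xpath('//dl/dt/text()').extract()
vals = response.xpath('//dl/dd/text()').extract()
for key, val in zip(keys, vals):
    item[key] = val.strip() or '--'  # keep '--' as the fallback for empty values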

Aside from the minor issues, the biggest problem is that the try/except clause in your parse() method fails silently, and you have a typo, Rquest, so the spider simply moves on. This is why you should never use silent blanket excepts :)
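If you do want to keep a try/except around the request building, at least log what it swallows. Here is a minimal sketch of the same parse() loop using the spider's built-in self.logger; it would have surfaced the AttributeError caused by the Rquest typo immediately instead of hiding it:

    def parse(self, response):
        for href in response.xpath('//div[@id="quotesearch"]/ul/li/a/@href').extract():
            try:
                stock = re.split(r'[./]', href)[5]
                url = 'https://gupiao.baidu.com/stock/' + stock + '.html'
                yield scrapy.Request(url, callback=self.parse_stock)
            except Exception:
                # with the original scrapy.Rquest typo this would log
                # "AttributeError: module 'scrapy' has no attribute 'Rquest'"
                # rather than silently dropping the link
                self.logger.exception('failed to build request from %r', href)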

Regarding "python - My scrapy can't get a valid response", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/44369710/
