python - Unable to fill in a form with Scrapy

Tags: python, forms, web-crawler, scrapy

I've just started using Scrapy and want to pull some information from a real estate website. The site has a home page with a search form (method GET). In my start_requests I'm trying to go straight to the results page (recherche.php), setting all the GET parameters I can see in the address bar via the formdata argument. I also set my cookies, but that doesn't work either.

Here is my spider:

from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from scrapy.http import FormRequest, Request

from robots_immo.items import AnnonceItem

class ElyseAvenueSpider(BaseSpider):
    name = "elyse_avenue"
    allowed_domains = ["http://www.elyseavenue.com/"]

    def start_requests(self):
        return [FormRequest(url="http://www.elyseavenue.com/recherche.php",
                            formdata={'recherche':'recherche',
                                      'compteurLigne':'2',
                                      'numLigneCourante':'0',
                                      'inseeVille_0':'',
                                      'num_rubrique':'',
                                      'rechercheOK':'recherche',
                                      'recherche_budget_max':'',
                                      'recherche_budget_min':'',
                                      'recherche_surface_max':'',
                                      'recherche_surface_min':'',
                                      'recherche_distance_km_0':'20',
                                      'recherche_reference_bien':'',
                                      'recherche_type_logement':'9',
                                      'recherche_ville_0':''
                                     },
                            cookies={'PHPSESSID':'4e1d729f68d3163bb110ad3e4cb8ffc3',
                                     '__utma':'150766562.159027263.1340725224.1340725224.1340727680.2',
                                     '__utmc':'150766562',
                                     '__utmz':'150766562.1340725224.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none)',
                                     '__utmb':'150766562.14.10.1340727680'
                                    },
                            callback=self.parseAnnonces
                           )]



    def parseAnnonces(self, response):
        hxs = HtmlXPathSelector(response)
        annonces = hxs.select('//div[@id="contenuCentre"]/div[@class="blocVignetteBien"]')
        items = []
        for annonce in annonces:
            item = AnnonceItem()
            item['nom'] = annonce.select('span[contains(@class,"nomBienImmo")]/a/text()').extract()
            item['superficie'] = annonce.select('table//tr[2]/td[2]/span/text()').extract()
            item['prix'] = annonce.select('span[@class="prixVignette"]/span[1]/text()').extract()
            items.append(item)
        return items


SPIDER = ElyseAvenueSpider()

When I run the spider there are no errors, but the page that loads is not the right one (it says "Please specify your search" and I get no results):

2012-06-26 20:04:54+0200 [elyse_avenue] INFO: Spider opened
2012-06-26 20:04:54+0200 [elyse_avenue] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2012-06-26 20:04:54+0200 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:6023
2012-06-26 20:04:54+0200 [scrapy] DEBUG: Web service listening on 0.0.0.0:6080
2012-06-26 20:04:54+0200 [elyse_avenue] DEBUG: Crawled (200) <POST http://www.elyseavenue.com/recherche.php> (referer: None)
2012-06-26 20:04:54+0200 [elyse_avenue] INFO: Closing spider (finished)
2012-06-26 20:04:54+0200 [elyse_avenue] INFO: Dumping spider stats:
    {'downloader/request_bytes': 808,
     'downloader/request_count': 1,
     'downloader/request_method_count/POST': 1,
     'downloader/response_bytes': 7590,
     'downloader/response_count': 1,
     'downloader/response_status_count/200': 1,
     'finish_reason': 'finished',
     'finish_time': datetime.datetime(2012, 6, 26, 18, 4, 54, 924624),
     'scheduler/memory_enqueued': 1,
     'start_time': datetime.datetime(2012, 6, 26, 18, 4, 54, 559230)}
2012-06-26 20:04:54+0200 [elyse_avenue] INFO: Spider closed (finished)
2012-06-26 20:04:54+0200 [scrapy] INFO: Dumping global stats:
    {'memusage/max': 27410432, 'memusage/startup': 27410432}
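One detail stands out in the log: the request was crawled as a POST, even though the site's search form uses GET. FormRequest defaults to POST unless told otherwise, so (assuming recherche.php really expects GET parameters, as the address bar suggests) passing method='GET' would make Scrapy encode the formdata into the query string instead. A minimal sketch of that variant:

return [FormRequest(url="http://www.elyseavenue.com/recherche.php",
                    method='GET',  # force GET so formdata is encoded into the query string
                    formdata={'recherche': 'recherche',
                              'recherche_distance_km_0': '20',
                              'recherche_type_logement': '9'},  # ...plus the remaining fields from above
                    callback=self.parseAnnonces)]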

Thanks for your help!

Best Answer

I would use FormRequest.from_response(), which does all of that work for you, since otherwise you might still be missing some fields:

from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from scrapy.http import FormRequest, Request

from robots_immo.items import AnnonceItem

class ElyseAvenueSpider(BaseSpider):

    name = "elyse_avenue"
    allowed_domains = ["elyseavenue.com"] # i fixed this
    start_urls = ["http://www.elyseavenue.com/"] # i added this

    def parse(self, response):
        yield FormRequest.from_response(response,
                                        formname='moteurRecherche',
                                        formdata={'recherche_distance_km_0': '20',
                                                  'recherche_type_logement': '9'},
                                        callback=self.parseAnnonces)

    def parseAnnonces(self, response):
        hxs = HtmlXPathSelector(response)
        annonces = hxs.select('//div[@id="contenuCentre"]/div[@class="blocVignetteBien"]')
        items = []
        for annonce in annonces:
            item = AnnonceItem()
            item['nom'] = annonce.select('span[contains(@class,"nomBienImmo")]/a/text()').extract()
            item['superficie'] = annonce.select('table//tr[2]/td[2]/span/text()').extract()
            item['prix'] = annonce.select('span[@class="prixVignette"]/span[1]/text()').extract()
            items.append(item)
        return items
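FormRequest.from_response() parses the form element out of the downloaded page, so it picks up the form's action URL, its method (GET in this case) and every hidden or pre-filled field automatically; the formdata argument only overrides the fields you name. The form name 'moteurRecherche' is taken from the site's HTML, and the Scrapy shell is a quick way to verify it and the field names before running the spider, roughly along these lines:

$ scrapy shell http://www.elyseavenue.com/
>>> hxs.select('//form/@name').extract()   # should list 'moteurRecherche' among the forms
>>> hxs.select('//form[@name="moteurRecherche"]//input/@name').extract()   # the fields from_response() will submit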

Regarding python - unable to fill in a form with Scrapy, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/11213467/
