I'm trying to scrape all the information from this site ("https://www.karl.com/experience/en/?yoox_storelocator_action=true&action=yoox_storelocator_get_all_stores"), but I can't write it to a file. The file isn't even created. Here is my code:
import scrapy # Scraper
import json # JSON manipulation
import jsonpickle # Object serializer
class Karl(scrapy.Spider):
    # Needed vars
    name = 'Karl'  # Spider's name
    url = "https://www.karl.com/experience/en/?yoox_storelocator_action=true&action=yoox_storelocator_get_all_stores"
    start_url = [
        url,
    ]

    # Called by Scrapy itself
    def parse(self, response):
        filename = '%s.json' % self.name
        response = json.loads(response.body)
        response = jsonpickle.encode(response)
        with open(filename, 'w') as f:  # Save the JSON file created
            f.write(response)
When I run scrapy crawl Karl, these are the last lines I get:
2018-07-24 16:02:25 [scrapy.core.engine] INFO: Spider opened
2018-07-24 16:02:26 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0
pages/min), scraped 0 items (at 0 items/min)
2018-07-24 16:02:26 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2018-07-24 16:02:26 [scrapy.core.engine] INFO: Closing spider (finished)
2018-07-24 16:02:26 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'finish_reason': 'finished',
'finish_time': datetime.datetime(2018, 7, 24, 14, 2, 26, 861204),
'log_count/DEBUG': 1,
'log_count/INFO': 7,
'memusage/max': 54804480,
'memusage/startup': 54804480,
'start_time': datetime.datetime(2018, 7, 24, 14, 2, 26, 550318)}
Can you guys help me? I've been using scrapy for a while, and this is the first time this has happened. Thanks!
Best Answer
There is a typo in your spider: start_url should be start_urls. You also need an allowed_domains variable, and there is no need to declare url separately.
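The reason no error shows up is that a misspelled class attribute simply creates a new name; Scrapy quietly falls back to its empty default and finds nothing to crawl. A minimal sketch (with a made-up `Base`/`Misnamed` pair standing in for `scrapy.Spider` and the misspelled spider) illustrates this:

```python
# Stand-in for scrapy.Spider: the framework reads `start_urls`,
# which defaults to an empty list.
class Base:
    start_urls = []

# The typo defines a *different* attribute; Python raises no error,
# and the framework still sees the empty default.
class Misnamed(Base):
    start_url = ["https://example.com"]  # missing the trailing 's'

print(Misnamed.start_urls)  # [] -> zero requests scheduled, spider closes
```

This is exactly why the log above shows "Crawled 0 pages" and an immediate clean shutdown rather than a traceback.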
Your code should be:
class Karl(scrapy.Spider):
    name = 'Karl'
    start_urls = ["https://www.karl.com/experience/en/?yoox_storelocator_action=true&action=yoox_storelocator_get_all_stores"]
    allowed_domains = ['karl.com']
    ## Snip ##
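Once the spider actually receives a response, the file-writing part of parse already works. A minimal sketch (using a hypothetical body bytes object in place of response.body) shows that the standard json module alone can round-trip the payload, so jsonpickle is not strictly required here:

```python
import json

# Hypothetical payload standing in for response.body (the real data
# comes from the store-locator endpoint).
body = b'{"stores": [{"name": "KARL LAGERFELD", "city": "Paris"}]}'

data = json.loads(body)           # bytes -> plain dicts/lists
with open('Karl.json', 'w') as f:
    json.dump(data, f, indent=2)  # serialize straight back to disk

with open('Karl.json') as f:
    print(json.load(f) == data)   # True: lossless round trip
```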
You can also generate a new spider with scrapy genspider, which uses the default template and would be helpful in this case.
About "python - problem with scrapy - no items scraped", see the similar question on Stack Overflow: https://stackoverflow.com/questions/51501065/