I am trying to read from and write all of my output to MySQL. When my spider starts crawling, I want to fetch all the URLs from the MySQL database, so I tried to write a function that reads the data.
readdata.py:
import mysql.connector
from mysql.connector import Error


def dataReader(marketName):
    connection = None
    cursor = None
    try:
        connection = mysql.connector.connect(host='localhost',
                                             database='test',
                                             user='root',
                                             port=3306,
                                             password='1234')
        # Parameterized query: avoids SQL injection through marketName.
        sql_select_Query = "SELECT shop_URL FROM datatable.bot_markets WHERE shop_name = %s"
        cursor = connection.cursor()
        cursor.execute(sql_select_Query, (marketName,))
        records = cursor.fetchall()
        return records
    except Error as e:
        print("Error reading data from MySQL table", e)
    finally:
        if connection is not None and connection.is_connected():
            cursor.close()       # close the cursor before the connection
            connection.close()
            print("MySQL connection is closed")
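Note what this function actually returns: a DB-API `cursor.fetchall()` yields a list of tuples, one tuple per row, even when only a single column is selected. The following illustration (not part of the question's code) uses the standard-library `sqlite3` module as a stand-in for `mysql.connector`, since both follow the same DB-API behaviour:

```python
# Illustration: fetchall() returns a list of 1-tuples, not a list of strings.
# sqlite3 stands in for mysql.connector here; the row shape is the same.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bot_markets (shop_name TEXT, shop_URL TEXT)")
conn.execute("INSERT INTO bot_markets VALUES ('quotes', 'http://quotes.toscrape.com/')")

cursor = conn.execute("SELECT shop_URL FROM bot_markets WHERE shop_name = ?", ("quotes",))
records = cursor.fetchall()
print(records)  # [('http://quotes.toscrape.com/',)] -- tuples, not plain strings
conn.close()
```

This row shape is exactly what the error message below is complaining about.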
I want to call this function from my spider, like this.
My spider:
import scrapy
import re
import mysql.connector

from ..items import FirstBotItem
from scrapy.utils.project import get_project_settings
from first_bot.readdata import dataReader


class My_Spider(scrapy.Spider):
    name = "My_Spider"
    allowed_domains = ["quotes.toscrape.com"]
    start_urls = dataReader(name)

    def parse(self, response):
        location = "quotes"
        for product in response.xpath('.//div[@class="product-card product-action "]'):
            product_link = response.url
            prices = product.xpath('.//div[@class="price-tag"]/span[@class="value"]/text()').get()
            if prices is not None:
                prices = re.sub(r"\s", "", prices)
            title = product.xpath('.//h5[@class="title product-card-title"]/a/text()').get()
            unit = product.xpath('.//div[@class="select single-select"]//i/text()').get()
            if unit is not None:
                unit = re.sub(r"\s", "", unit)
            item = FirstBotItem()
            item['LOKASYON'] = location
            item['YEAR'] = 2020
            item['MONTH'] = 8
            yield item
I'm doing something wrong with start_urls, but I can't figure out what. I get this error:
_set_url
raise TypeError('Request url must be str or unicode, got %s:' % type(url).__name__)
TypeError: Request url must be str or unicode, got tuple:
2020-08-24 15:46:31 [scrapy.core.engine] INFO: Closing spider (finished)
My main task is to fetch all the URLs from the database, so that when someone adds URLs for the same site to the database, the spider will crawl them automatically.
Best answer
The error message tells you what is wrong: each element of `start_urls` is a tuple, not a string. `cursor.fetchall()` returns a list of one-element tuples (one per row), so in `dataReader` you should return a flat list of URL strings, e.g. `return [row[0] for row in records]` instead of `return records`.
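A minimal sketch of that fix, with the flattening pulled out into a standalone helper (`flatten_urls` is a name chosen here for illustration, not from the original code):

```python
# Flatten the list of 1-tuples that cursor.fetchall() returns into a plain
# list of URL strings, which is what Scrapy's start_urls expects.
def flatten_urls(records):
    """records looks like [('http://a',), ('http://b',)] -- one tuple per row."""
    return [row[0] for row in records]

rows = [("http://quotes.toscrape.com/",), ("http://example.com/",)]
print(flatten_urls(rows))  # ['http://quotes.toscrape.com/', 'http://example.com/']
```

In `dataReader`, this corresponds to `return [row[0] for row in records]` in place of `return records`.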
Regarding "python - How to read Scrapy start_urls from a MySQL database?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/63561597/