I have a web scraper built with Scrapy and Python that crawls universities' international entry requirements. It worked well and extracted all the information I needed until I tried to have it add the results to a MySQL database automatically. I have now written a pipeline that inserts the results into MySQL, but for some reason it skips any result containing an apostrophe. I suspect this is related to UTF-8 encoding.
Just to clarify: everything works, except that when a page contains an apostrophe the pipeline refuses to upload that information to MySQL. Does anyone know how to handle this?
I'll include one of my spiders and the item pipeline below. Thanks.
bristol.py
from scrapy.spider import BaseSpider
from project.items import QualificationItem
from scrapy.selector import HtmlXPathSelector
from scrapy.http.request import Request
from urlparse import urljoin

USER_AGENT = 'Mozilla/5.0 (X11; Linux x86_64; rv:27.0) Gecko/20100101 Firefox/27.0'

class recursiveSpider(BaseSpider):
    name = 'bristol'
    allowed_domains = ['bristol.ac.uk/']
    start_urls = ['http://www.bristol.ac.uk/international/countries/']

    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        xpath = '//*[@id="all-countries"]/li/ul/li/a/@href'
        a_of_the_link = '//*[@id="all-countries"]/li/ul/li/a/text()'
        for text, link in zip(hxs.select(a_of_the_link).extract(), hxs.select(xpath).extract()):
            yield Request(urljoin(response.url, link),
                          meta={'a_of_the_link': text},
                          headers={'User-Agent': USER_AGENT},
                          callback=self.parse_linkpage,
                          dont_filter=True)

    def parse_linkpage(self, response):
        hxs = HtmlXPathSelector(response)
        item = QualificationItem()
        xpath = """
            //h2[normalize-space(.)="Entry requirements for undergraduate courses"]
            /following-sibling::p[not(preceding-sibling::h2[normalize-space(.)!="Entry requirements for undergraduate courses"])]
        """
        item['BristolQualification'] = hxs.select(xpath).extract()[1:]
        item['BristolCountry'] = response.meta['a_of_the_link']
        return item
pipelines.py
import sys
import MySQLdb
import MySQLdb.cursors
import hashlib
from scrapy.exceptions import DropItem
from scrapy.http import Request

class TestPipeline(object):
    def __init__(self):
        self.conn = MySQLdb.connect(
            user='c1024403',
            passwd='Beeph3',
            db='c1024403',
            host='ephesus.cs.cf.ac.uk',
        )
        self.cursor = self.conn.cursor()

    def process_item(self, item, spider):
        try:
            if 'BristolQualification' in item:
                self.cursor.execute("""INSERT INTO Bristol(BristolCountry, BristolQualification) VALUES ('{0}', '{1}')""".format(item['BristolCountry'], "".join([s.encode('utf-8') for s in item['BristolQualification']])))
            elif 'BathQualification' in item:
                self.cursor.execute("""INSERT INTO Bath(BathCountry, BathQualification) VALUES ('{0}', '{1}')""".format(item['BathCountry'], "".join([s.encode('utf-8') for s in item['BathQualification']])))
            self.conn.commit()
            return item
        except MySQLdb.Error as e:
            print "Error %d: %s" % (e.args[0], e.args[1])
items.py
from scrapy.item import Item, Field

class QualificationItem(Item):
    BristolQualification = Field()
    BristolCountry = Field()
    BathQualification = Field()
    BathCountry = Field()
Best answer
Your code is vulnerable to SQL injection.
First, look at your SQL and think about what happens when an apostrophe shows up:
self.cursor.execute(
    """INSERT INTO Bristol(BristolCountry, BristolQualification) VALUES ('{0}', '{1}')""".format(
        item['BristolCountry'],
        "".join([s.encode('utf-8') for s in item['BristolQualification']])))
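To make the failure concrete, here is a minimal sketch (plain Python string formatting, no database needed; the country value is a made-up example) of the statement the pipeline builds when the data contains an apostrophe:

```python
# Rebuild the SQL string exactly as the pipeline's .format() call does.
country = "People's Republic of China"   # hypothetical value with an apostrophe
qualification = "Gaokao"

sql = """INSERT INTO Bristol(BristolCountry, BristolQualification) VALUES ('{0}', '{1}')""".format(
    country, qualification)

print(sql)
# The apostrophe in "People's" terminates the first quoted literal early,
# leaving MySQL with malformed SQL, so the INSERT is rejected. Worse, with
# crafted input, arbitrary SQL could be executed.
```

This is also why the rows go missing silently: the pipeline's except block only prints the MySQL error and the crawl carries on.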
Second, read this.
The final, fixed version:
self.cursor.execute(
    "INSERT INTO Bristol(BristolCountry, BristolQualification) VALUES (%s, %s)", (
        item['BristolCountry'],
        "".join([s.encode('utf-8') for s in item['BristolQualification']])
    )
)
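The same parameter-substitution idea can be demonstrated with Python's built-in sqlite3 module, used here only because it needs no running server. Note that sqlite3's placeholder token is ? where MySQLdb's is %s; the principle of letting the driver escape values is identical.

```python
import sqlite3

# Minimal sketch: an in-memory database standing in for the real MySQL table.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE Bristol (BristolCountry TEXT, BristolQualification TEXT)")

# Parameterized query: the driver escapes the apostrophe, not our code.
cur.execute("INSERT INTO Bristol (BristolCountry, BristolQualification) VALUES (?, ?)",
            ("People's Republic of China", "Gaokao"))
conn.commit()

cur.execute("SELECT BristolCountry, BristolQualification FROM Bristol")
row = cur.fetchone()
print(row)  # the value round-trips intact, apostrophe included
```

With the fixed pipeline, items whose text contains apostrophes are inserted like any other, and no manual escaping or encoding tricks are needed.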
Regarding "python - How to handle apostrophes in scrapy and MySQL? The spider completely ignores data containing a ' character", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/23249507/