I have a Python application that scrapes hundreds of PDF files from public websites and parses them with the PyPDF2 library.
Of the hundreds of such files it parses successfully, one gives me heartburn. It is 18 pages long. The file name is "bad.pdf". It can be seen here.
Here is my code, which parses the document:
$ virtualenv my_env
$ source my_env/bin/activate
(my_env) $ pip install PyPDF2==1.26.0
(my_env) $ python
>>> import PyPDF2
>>> def parse_pdf_doc():
...     pdfFileObj = open('bad.pdf', 'rb')
...     pdfReader = PyPDF2.PdfFileReader(pdfFileObj)
...     for curr_page_num in range(pdfReader.numPages):
...         print('curr_page_num = {}'.format(curr_page_num))
...         pageObj = pdfReader.getPage(curr_page_num)
...         print('\tPage Retrieved successfully')
...         page_text = pageObj.extractText()
...         print('\tText extracted successfully')
...
When I run this code, it successfully parses the first nine pages. But on the tenth page it hangs. Forever:
>>> parse_pdf_doc()
curr_page_num = 0
        Page Retrieved successfully
        Text extracted successfully
curr_page_num = 1
        Page Retrieved successfully
        Text extracted successfully
curr_page_num = 2
        Page Retrieved successfully
        Text extracted successfully
curr_page_num = 3
        Page Retrieved successfully
        Text extracted successfully
curr_page_num = 4
        Page Retrieved successfully
        Text extracted successfully
curr_page_num = 5
        Page Retrieved successfully
        Text extracted successfully
curr_page_num = 6
        Page Retrieved successfully
        Text extracted successfully
curr_page_num = 7
        Page Retrieved successfully
        Text extracted successfully
curr_page_num = 8
        Page Retrieved successfully
        Text extracted successfully
curr_page_num = 9
        Page Retrieved successfully
<... hung here forever ...>
What is wrong with page 10? Let's open it in a viewer. Whoa: even Google Docs cannot render page 10. So something in that page is definitely corrupted:
Still, I need PyPDF2 to throw an exception or otherwise fail rather than go into an infinite loop; the hang breaks my workflow. How can I work around this broken page in the PDF file?
Best Answer
The template below should give you an idea of how to achieve this.
from multiprocessing import Process
from pdfminer.pdfpage import PDFPage  # PDFPage comes from pdfminer, not PyPDF2
processTimeout = 20  # seconds to allow each page before giving up
pdfFileObj = open('bad.pdf', 'rb')
for page in PDFPage.get_pages(pdfFileObj):
    extractTextProcess = Process(target=Function_to_extract_text, args=(pdfFileObj, page))
    extractTextProcess.start()
    extractTextProcess.join(processTimeout)
    if extractTextProcess.is_alive():
        extractTextProcess.terminate()  # the page hung; kill the worker
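Filled out, that template might look like the following runnable sketch. The helper name `run_with_timeout` and the `_worker` function are my own scaffolding, not part of the original answer; it assumes the default fork start method on Linux (with the spawn method used on Windows, the callable and its arguments must be picklable).

```python
from multiprocessing import Process, Queue


def _worker(queue, func, args):
    # Runs in the child process and ships the result back to the parent.
    queue.put(func(*args))


def run_with_timeout(func, args, timeout):
    """Run func(*args) in a child process. Return its result, or raise
    TimeoutError if it has not finished after `timeout` seconds."""
    queue = Queue()
    proc = Process(target=_worker, args=(queue, func, args))
    proc.start()
    proc.join(timeout)
    if proc.is_alive():
        proc.terminate()  # the call hung; kill the stuck worker
        proc.join()
        raise TimeoutError('call exceeded {} seconds'.format(timeout))
    return queue.get()
```

Applied to the loop from the question, `page_text = run_with_timeout(pageObj.extractText, (), 20)` turns a hung page into a `TimeoutError` you can catch and log instead of blocking forever.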
You can also open the file with the with keyword (to avoid leaking the file handle).
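A minimal illustration of that point, using plain file I/O only (no PyPDF2 needed): even if handling a page raises, the with block closes the handle on exit, so a failed parse cannot leak file descriptors.

```python
import os
import tempfile

# Create a stand-in for bad.pdf in a temporary directory.
path = os.path.join(tempfile.mkdtemp(), 'demo.pdf')
with open(path, 'wb') as f:
    f.write(b'%PDF-1.4 demo bytes')

# Even when the body raises (simulating a failed page parse),
# the `with` block still closes the file handle on the way out.
try:
    with open(path, 'rb') as pdfFileObj:
        data = pdfFileObj.read()
        raise ValueError('simulated parse failure')
except ValueError:
    pass

assert pdfFileObj.closed  # no leaked descriptor
```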
Regarding "python - Can I get PyPDF2 to fail gracefully when the PDF it is parsing is broken?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/53690057/