I'm using SciPy's KDTree implementation to read a large 300 MB file. Is there a way to save the data structure to disk and load it again, or am I stuck reading the raw points from the file and rebuilding the data structure every time the program starts? I build the KDTree as follows:
def buildKDTree(self):
    # read whitespace-separated values and reshape to (npoints, NDIM)
    self.kdpoints = numpy.fromfile("All", sep=' ')
    self.kdpoints.shape = self.kdpoints.size / self.NDIM, self.NDIM
    self.kdtree = KDTree(self.kdpoints, leafsize=self.kdpoints.shape[0] + 1)
    print "Preparing KDTree... Ready!"
Any suggestions?
Best answer
KDTree uses nested classes to define its node types (innernode, leafnode). Pickle only works with module-level class definitions, so the nesting trips it up:
import cPickle

class Foo(object):
    class Bar(object):
        pass

obj = Foo.Bar()
print obj.__class__
cPickle.dumps(obj)

Output:

<class '__main__.Bar'>
cPickle.PicklingError: Can't pickle <class '__main__.Bar'>: attribute lookup __main__.Bar failed
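The same lookup rule explains why the patch below works: binding the nested class to a module-level name gives pickle an attribute path it can resolve. A minimal, self-contained sketch of the idea, using a hypothetical Foo/Bar pair (note: Python 3 resolves nested classes through __qualname__ on its own, so the failure above is specific to Python 2's pickle):

```python
import pickle  # cPickle on Python 2

class Foo(object):
    class Bar(object):
        pass

# The workaround: expose the nested class at module scope.
# Python 2's pickle needs this binding to find the class by name;
# Python 3 resolves Foo.Bar via __qualname__ without it.
Bar = Foo.Bar

obj = Foo.Bar()
data = pickle.dumps(obj)       # serialize to a byte string
restored = pickle.loads(data)  # rebuild the object from the bytes
assert isinstance(restored, Foo.Bar)
```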
However, there is a (hacky) workaround: monkey-patch the class definitions into module scope on scipy.spatial.kdtree so the pickler can find them. As long as every piece of code that reads or writes pickled KDTree objects applies these patches, the hack should work fine:
import cPickle
import numpy
from scipy.spatial import kdtree

# patch module-level attributes to enable pickle to work
kdtree.node = kdtree.KDTree.node
kdtree.leafnode = kdtree.KDTree.leafnode
kdtree.innernode = kdtree.KDTree.innernode

x, y = numpy.mgrid[0:5, 2:8]
t1 = kdtree.KDTree(zip(x.ravel(), y.ravel()))
r1 = t1.query([3.4, 4.1])

# round-trip the tree through a pickle string
raw = cPickle.dumps(t1)
t2 = cPickle.loads(raw)
r2 = t2.query([3.4, 4.1])

print t1.tree.__class__
print repr(raw)[:70]
print t1.data[r1[1]], t2.data[r2[1]]
Output:
<class 'scipy.spatial.kdtree.innernode'>
"ccopy_reg\n_reconstructor\np1\n(cscipy.spatial.kdtree\nKDTree\np2\nc_
[3 4] [3 4]
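cPickle.dumps only round-trips the tree in memory; to avoid rebuilding it on every program start, as the question asks, write the pickle bytes to a file and load them back on the next run. A minimal sketch of the disk round-trip, shown with stand-in data since the same two helpers work unchanged on a patched KDTree instance:

```python
import os
import pickle  # cPickle on Python 2
import tempfile

def save_pickled(obj, path):
    # serialize the object to disk in binary mode
    with open(path, "wb") as f:
        pickle.dump(obj, f, protocol=pickle.HIGHEST_PROTOCOL)

def load_pickled(path):
    # rebuild the object from the bytes on disk
    with open(path, "rb") as f:
        return pickle.load(f)

# stand-in data; a patched KDTree would be saved the same way
points = [(0.0, 2.0), (1.0, 3.0), (4.0, 7.0)]
path = os.path.join(tempfile.gettempdir(), "tree.pkl")
save_pickled(points, path)
restored = load_pickled(path)
```

With helpers like these, buildKDTree can first try load_pickled and fall back to reading the raw point file only when the cached pickle is missing or stale.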
This answer to "Saving KDTree object in Python?" is based on a similar question found on Stack Overflow: https://stackoverflow.com/questions/5773216/