I am trying to store a large number of numpy structured arrays as datasets in an HDF5 file.
For example,
f['tree1'] = structured_array1
...
f['tree60000'] = structured_array60000
(there are roughly 60,000 trees).
About 70% of the way through reading the file, I get the error: RuntimeError: Unable to register datatype atom (Can't insert duplicate key)
The problem only occurs with very large ASCII files (10e7 lines, 5 GB). It does not happen with files of around 10e6 lines (500 MB), and it also does not happen if I drop the dtype and store the data as numpy string arrays instead.
I can work around it by stopping partway through the file, closing the terminal, opening it again, and resuming from where I left off until the end (I save the line number where I stopped). I tried opening and closing the HDF5 file inside the Python function itself, but that did not help.
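The resume workaround described above can be sketched like this (a minimal illustration; the checkpoint file name and helper names are hypothetical, not from the original code):

```python
CHECKPOINT = "resume_line.txt"  # hypothetical file holding the last processed line number

def resume_offset():
    """Return the saved line number, or 0 if no checkpoint exists yet."""
    try:
        with open(CHECKPOINT) as fh:
            return int(fh.read().strip())
    except (IOError, ValueError):
        return 0

def save_checkpoint(lineno):
    """Record how far we got, so a fresh process can pick up from here."""
    with open(CHECKPOINT, "w") as fh:
        fh.write(str(lineno))

def read_from(fname):
    """Yield (line_number, line) pairs, skipping lines already processed."""
    start = resume_offset()
    with open(fname) as ascii_file:
        for lineno, line in enumerate(ascii_file):
            if lineno < start:
                continue
            yield lineno, line
```

Calling `save_checkpoint` periodically and restarting the process lets the run continue from the saved line, which matches the manual restart described above.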
dt = [
('scale', 'f4'),
('haloid', 'i8'),
('scale_desc', 'f4'),
('haloid_desc', 'i8'),
('num_prog', 'i4'),
('pid', 'i8'),
('upid', 'i8'),
('pid_desc', 'i8'),
('phantom', 'i4'),
('mvir_sam', 'f4'),
('mvir', 'f4'),
('rvir', 'f4'),
('rs', 'f4'),
('vrms', 'f4'),
('mmp', 'i4'),
('scale_lastmm', 'f4'),
('vmax', 'f4'),
('x', 'f4'),
('y', 'f4'),
('z', 'f4'),
('vx', 'f4'),
('vy', 'f4'),
('vz', 'f4'),
('jx', 'f4'),
('jy', 'f4'),
('jz', 'f4'),
('spin', 'f4'),
('haloid_breadth_first', 'i8'),
('haloid_depth_first', 'i8'),
('haloid_tree_root', 'i8'),
('haloid_orig', 'i8'),
('snap_num', 'i4'),
('haloid_next_coprog_depthfirst', 'i8'),
('haloid_last_prog_depthfirst', 'i8'),
('haloid_last_mainleaf_depthfirst', 'i8'),
('rs_klypin', 'f4'),
('mvir_all', 'f4'),
('m200b', 'f4'),
('m200c', 'f4'),
('m500c', 'f4'),
('m2500c', 'f4'),
('xoff', 'f4'),
('voff', 'f4'),
('spin_bullock', 'f4'),
('b_to_a', 'f4'),
('c_to_a', 'f4'),
('axisA_x', 'f4'),
('axisA_y', 'f4'),
('axisA_z', 'f4'),
('b_to_a_500c', 'f4'),
('c_to_a_500c', 'f4'),
('axisA_x_500c', 'f4'),
('axisA_y_500c', 'f4'),
('axisA_z_500c', 'f4'),
('t_by_u', 'f4'),
('mass_pe_behroozi', 'f4'),
('mass_pe_diemer', 'f4')
]
def read_in_trees(self):
    """Store each tree as an hdf5 dataset."""
    with open(self.fname) as ascii_file:
        with h5py.File(self.hdf5_name, "r+") as f:
            tree_id = ""
            current_tree = []
            for line in ascii_file:
                if line[0] == '#':  # header line: start of a new tree
                    if current_tree:  # flush the previous tree, if any
                        f[tree_id] = np.array(current_tree, dtype=dt)
                        current_tree = []
                    tree_id = line[6:].strip('\n')
                else:  # read in the next tree element
                    current_tree.append(tuple(line.split()))
            if current_tree:  # don't forget the last tree in the file
                f[tree_id] = np.array(current_tree, dtype=dt)
Error:
/Volumes/My Passport for Mac/raw_trees/bolshoi/rockstar/asciiReaderOne.py in read_in_trees(self)
129 arr = np.array(current_tree, dtype = dt)
130 # depth_sort = arr['haloid_depth_first'].argsort()
--> 131 f[tree_id] = arr
132 current_tree = []
133 first_line = False
/Library/Python/2.7/site-packages/h5py/_objects.so in h5py._objects.with_phil.wrapper (/Users/travis/build/MacPython/h5py-wheels/h5py/h5py/_objects.c:2458)()
/Library/Python/2.7/site-packages/h5py/_objects.so in h5py._objects.with_phil.wrapper (/Users/travis/build/MacPython/h5py-wheels/h5py/h5py/_objects.c:2415)()
/Library/Python/2.7/site-packages/h5py/_hl/group.pyc in __setitem__(self, name, obj)
281
282 else:
--> 283 ds = self.create_dataset(None, data=obj, dtype=base.guess_dtype(obj))
284 h5o.link(ds.id, self.id, name, lcpl=lcpl)
285
/Library/Python/2.7/site-packages/h5py/_hl/group.pyc in create_dataset(self, name, shape, dtype, data, **kwds)
101 """
102 with phil:
--> 103 dsid = dataset.make_new_dset(self, shape, dtype, data, **kwds)
104 dset = dataset.Dataset(dsid)
105 if name is not None:
/Library/Python/2.7/site-packages/h5py/_hl/dataset.pyc in make_new_dset(parent, shape, dtype, data, chunks, compression, shuffle, fletcher32, maxshape, compression_opts, fillvalue, scaleoffset, track_times)
124
125 if data is not None:
--> 126 dset_id.write(h5s.ALL, h5s.ALL, data)
127
128 return dset_id
/Library/Python/2.7/site-packages/h5py/_objects.so in h5py._objects.with_phil.wrapper (/Users/travis/build/MacPython/h5py-wheels/h5py/h5py/_objects.c:2458)()
/Library/Python/2.7/site-packages/h5py/_objects.so in h5py._objects.with_phil.wrapper (/Users/travis/build/MacPython/h5py-wheels/h5py/h5py/_objects.c:2415)()
/Library/Python/2.7/site-packages/h5py/h5d.so in h5py.h5d.DatasetID.write (/Users/travis/build/MacPython/h5py-wheels/h5py/h5py/h5d.c:3260)()
/Library/Python/2.7/site-packages/h5py/h5t.so in h5py.h5t.py_create (/Users/travis/build/MacPython/h5py-wheels/h5py/h5py/h5t.c:15314)()
/Library/Python/2.7/site-packages/h5py/h5t.so in h5py.h5t.py_create (/Users/travis/build/MacPython/h5py-wheels/h5py/h5py/h5t.c:14903)()
/Library/Python/2.7/site-packages/h5py/h5t.so in h5py.h5t._c_compound (/Users/travis/build/MacPython/h5py-wheels/h5py/h5py/h5t.c:14192)()
/Library/Python/2.7/site-packages/h5py/h5t.so in h5py.h5t.py_create (/Users/travis/build/MacPython/h5py-wheels/h5py/h5py/h5t.c:15314)()
/Library/Python/2.7/site-packages/h5py/h5t.so in h5py.h5t.py_create (/Users/travis/build/MacPython/h5py-wheels/h5py/h5py/h5t.c:14749)()
/Library/Python/2.7/site-packages/h5py/h5t.so in h5py.h5t._c_float (/Users/travis/build/MacPython/h5py-wheels/h5py/h5py/h5t.c:12379)()
RuntimeError: Unable to register datatype atom (Can't insert duplicate key)
Best Answer
Do you get an error stack showing where in the code the error is produced?
You report the error: RuntimeError: Unable to register datatype atom (Can't insert duplicate key)
In /usr/lib/python3/dist-packages/h5py/_hl/datatype.py:
class Datatype(HLObject):
    # Represents an HDF5 named datatype stored in a file.
    # >>> MyGroup["name"] = numpy.dtype("f")
    def __init__(self, bind):
        """ Create a new Datatype object by binding to a low-level TypeID. """
I'll throw out a guess here. Your dt
has 57 fields. I suspect that every time a tree is added to the file, each field is registered as a new datatype.
In [71]: (57*10e7*.7)/(2**32)
Out[71]: 0.9289942681789397
70% of 57 * 10e7 is close to 2**32. If Python/numpy uses an int32 as the dtype id, you may be hitting that limit.
We would have to dig deeper into the h5py
or numpy
code to find out what raises this error message.
By adding the array to the file with:
f[tree_id] = arr
you put each array into a new dataset in the group. If each dataset has its own datatype, or one datatype per field of the array, you could easily reach 2**32 datatypes.
If, on the other hand, you could store multiple arr
into one group or dataset, you might avoid registering thousands of datatypes. I'm not familiar enough with h5py
to suggest exactly how to do that.
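One way to store everything in a single dataset (a hedged sketch of my own, not part of the original answer: the `tree_idx` field, the toy two-field dtype, and all names here are assumptions) is to concatenate the trees into one structured array tagged with a tree index, so only one compound datatype is ever written:

```python
import os
import tempfile

import numpy as np
import h5py

# Small illustrative dtype; the real dt has 57 fields.
dt = [('scale', 'f4'), ('haloid', 'i8')]
# Combined dtype with an extra field identifying which tree a row belongs to.
dt_combined = [('tree_idx', 'i4')] + dt

# Toy stand-in for the ~60000 trees parsed from the ASCII file.
trees = {0: [(0.5, 1), (0.7, 2)], 1: [(0.9, 3)]}

rows = []
for idx, tree in trees.items():
    rows.extend((idx,) + row for row in tree)

arr = np.array(rows, dtype=dt_combined)

path = os.path.join(tempfile.mkdtemp(), 'forest.h5')
with h5py.File(path, 'w') as f:
    f.create_dataset('all_trees', data=arr)  # one dataset, one compound type

# Reading a single tree back is a boolean-mask selection:
with h5py.File(path, 'r') as f:
    data = f['all_trees'][:]
    tree1 = data[data['tree_idx'] == 1]
```

This trades per-tree datasets for a single large one; selecting a tree becomes a mask over `tree_idx` rather than a group lookup.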
I wonder whether this sequence would let the datatype be reused across multiple datasets:
dt1 = np.dtype(dt)
gg = f.create_group('testgroup')
gg['xdtype'] = dt1        # commit dt1 as a named datatype in the file
# see h5py.Datatype doc
xdtype = gg['xdtype']     # an h5py.Datatype bound to the committed type
x = np.zeros((10,), dtype=xdtype)
gg['tree1'] = x
x = np.ones((10,), dtype=xdtype)
gg['tree2'] = x
Following the Datatype
documentation, I tried committing a named datatype and using it for each dataset added to the group.
In [117]: isinstance(xdtype, h5py.Datatype)
Out[117]: True
In [118]: xdtype.id
Out[118]: <h5py.h5t.TypeCompoundID at 0xb46e0d4c>
So, if I read def make_new_dset
correctly, this bypasses the py_create
call.
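Putting that idea back into the tree-writing loop might look like the following sketch (assumptions: `create_dataset` accepts the committed Datatype as its dtype, which the h5py documentation describes; `write_trees` and the abbreviated two-field dt are hypothetical):

```python
import os
import tempfile

import numpy as np
import h5py

dt = [('scale', 'f4'), ('haloid', 'i8')]  # abbreviated stand-in for the 57-field dt

def write_trees(path, trees):
    """Write each tree as its own dataset, reusing one committed datatype."""
    with h5py.File(path, 'w') as f:
        f['tree_dtype'] = np.dtype(dt)   # commit the compound type once
        committed = f['tree_dtype']      # h5py.Datatype handle for the committed type
        for tree_id, rows in trees.items():
            arr = np.array(rows, dtype=dt)
            # Passing the Datatype handle avoids creating a fresh compound type.
            ds = f.create_dataset(tree_id, shape=arr.shape, dtype=committed)
            ds[...] = arr

path = os.path.join(tempfile.mkdtemp(), 'trees.h5')
write_trees(path, {'tree1': [(0.5, 1)], 'tree2': [(0.9, 2), (1.0, 3)]})
```

Every dataset then shares the single committed type instead of registering 57 field types per tree, which is the behaviour the guess above suspects of exhausting the id space.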
Regarding python - Creating a large number of datasets with h5py - Unable to register datatype atom (Can't insert duplicate key), there is a similar question on Stack Overflow: https://stackoverflow.com/questions/31190573/