I am trying to run sklearn random forest classification on 279,900 instances with 5 attributes and 1 class. But I get a memory allocation error when trying to run the classification at the fit line; it cannot train the classifier itself. Any suggestions on how to resolve this issue?
The data is:
x, y, day, week, accuracy
x and y are coordinates; day is the day of the month (1-30); week is the day of the week (1-7); accuracy is an integer.
Code:
import csv
import numpy as np
from sklearn.ensemble import RandomForestClassifier

with open("time_data.csv", "rb") as infile:
    re1 = csv.reader(infile)
    result = []
    ##next(reader, None)
    ##for row in reader:
    for row in re1:
        result.append(row[8])

trainclass = result[:251900]
testclass = result[251901:279953]

with open("time_data.csv", "rb") as infile:
    re = csv.reader(infile)
    coords = [(float(d[1]), float(d[2]), float(d[3]), float(d[4]), float(d[5])) for d in re if len(d) > 0]
    train = coords[:251900]
    test = coords[251901:279953]

print "Done splitting data into test and train data"

clf = RandomForestClassifier(n_estimators=500, max_features="log2", min_samples_split=3, min_samples_leaf=2)
clf.fit(train, trainclass)
print "Done training"
score = clf.score(test, testclass)
print "Done Testing"
print score
Error:
line 366, in fit
builder.build(self.tree_, X, y, sample_weight, X_idx_sorted)
File "sklearn/tree/_tree.pyx", line 145, in sklearn.tree._tree.DepthFirstTreeBuilder.build
File "sklearn/tree/_tree.pyx", line 244, in sklearn.tree._tree.DepthFirstTreeBuilder.build
File "sklearn/tree/_tree.pyx", line 735, in sklearn.tree._tree.Tree._add_node
File "sklearn/tree/_tree.pyx", line 707, in sklearn.tree._tree.Tree._resize_c
File "sklearn/tree/_utils.pyx", line 39, in sklearn.tree._utils.safe_realloc
MemoryError: could not allocate 10206838784 bytes
Best answer
From the scikit-learn documentation: "The default values for the parameters controlling the size of the trees (e.g. max_depth, min_samples_leaf, etc.) lead to fully grown and unpruned trees which can potentially be very large on some data sets. To reduce memory consumption, the complexity and size of the trees should be controlled by setting those parameter values."
So I would try tuning those parameters. You could also profile the memory usage. If your computer has too little RAM, try running it on Google Colaboratory.
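To make the advice concrete, here is a minimal sketch of capping tree size with those parameters. The specific values (max_depth=15, min_samples_split=10, min_samples_leaf=5, n_estimators=100) are illustrative assumptions, not tuned for the asker's data, and the synthetic array stands in for the real CSV:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for the 5-attribute data set (assumption, not the real file).
rng = np.random.default_rng(0)
X = rng.random((1000, 5))
y = (X[:, 0] + X[:, 1] > 1).astype(int)

clf = RandomForestClassifier(
    n_estimators=100,      # fewer trees than the original 500
    max_depth=15,          # cap depth so trees cannot grow unbounded
    min_samples_split=10,  # require more samples before a node may split
    min_samples_leaf=5,    # larger leaves mean fewer nodes per tree
    max_features="log2",
    n_jobs=-1,             # parallelize tree building across cores
)
clf.fit(X, y)
print(clf.score(X, y))
```

Each of these settings trades a little model flexibility for a hard bound on the number of nodes allocated per tree, which is what the MemoryError in the traceback is about (safe_realloc failing while growing a tree).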
Regarding "python - memory allocation error in sklearn random forest classification in python", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/53526382/