Best Answer
Let me first lay out some general guidelines:
As mentioned before, the Euclidean metric fails to find the correct distance because it only measures the ordinary straight-line distance.
So if we have a multidimensional variable space, two points may appear to lie at the same distance from the mean while one of them is actually far away from the data cloud (i.e., it is an outlier).
The solution is the Mahalanobis distance, which does something similar to feature scaling by working along the variables' eigenvectors instead of the original axes.
It applies the following formula:

$D_M(x) = \sqrt{(x - m)^T \, S^{-1} \, (x - m)}$

where:

x is the observation whose distance we want to find;
m is the mean of the observations;
S is the covariance matrix.

Refresher:
Covariance represents the direction of the relationship between two variables (i.e., positive, negative, or zero), so it shows how strongly one variable's changes are related to changes in the others.
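To make that contrast concrete, here is a minimal sketch (my own illustration, not part of the original answer): two points equally far from the mean in Euclidean terms, where only one of them lies inside the data cloud:

import numpy as np

rng = np.random.default_rng(0)
# a strongly correlated 2D cloud, stretched along the diagonal
data = rng.multivariate_normal([0.0, 0.0], [[3.0, 2.9], [2.9, 3.0]], size=500)
m = data.mean(axis=0)
S_inv = np.linalg.inv(np.cov(data, rowvar=False))

a = np.array([2.0, 2.0])    # lies along the cloud's main direction
b = np.array([2.0, -2.0])   # same Euclidean distance from the mean, but off the cloud
for p in (a, b):
    d = p - m
    print(np.linalg.norm(d), np.sqrt(d.dot(S_inv).dot(d)))
# both Euclidean distances are ~2.83, yet b's Mahalanobis distance is several times larger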
Implementation
Consider this 6x3 dataset, where each row represents a sample and each column represents a feature of the given sample:
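(The dataset appears as an image in the original answer and is not reproduced here; the array below is a made-up stand-in with placeholder values so that the following snippets have something to run on.)

import numpy as np

# hypothetical 6x3 stand-in for the pictured table: 6 samples, 3 features each
data = np.array([[ 9,  7, 10],
                 [11,  8,  9],
                 [ 8, 10, 11],
                 [10,  9,  8],
                 [12, 11, 10],
                 [95, 90, 97]])  # placeholder values; the last row is deliberately an obvious outlier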
First, we need to create a covariance matrix of the features of each sample, which is why we set the parameter rowvar to False in the numpy.cov function, so that each column now represents a variable:

covariance_matrix = np.cov(data, rowvar=False)
# data here looks similar to the above table
Next, we find the inverse of the covariance matrix:

inv_covariance_matrix = np.linalg.inv(covariance_matrix)
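A caveat worth noting (my addition, not part of the original answer): if any feature is a linear combination of the others, the covariance matrix is singular and np.linalg.inv raises LinAlgError; the Moore–Penrose pseudo-inverse is a common fallback in that case:

# fallback for a singular covariance matrix (assumes the pseudo-inverse is acceptable)
inv_covariance_matrix = np.linalg.pinv(covariance_matrix)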
However, before proceeding, we should check, as mentioned above, whether the matrix and its inverse are symmetric and positive definite. For this we use the Cholesky decomposition algorithm, which, fortunately, is already implemented in numpy.linalg.cholesky:

def is_pos_def(A):
    # a symmetric matrix is positive definite iff its Cholesky decomposition exists
    if np.allclose(A, A.T):
        try:
            np.linalg.cholesky(A)
            return True
        except np.linalg.LinAlgError:
            return False
    else:
        return False
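As a quick sanity check of the helper (my own example, not from the original answer): the identity matrix passes, while a symmetric matrix with a negative eigenvalue fails:

print(is_pos_def(np.eye(3)))                       # True
print(is_pos_def(np.array([[1., 2.], [2., 1.]])))  # False: eigenvalues are 3 and -1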
Then we find the mean m of the variables on each feature (I should say dimension) and save them in an array like this:

vars_mean = []
for i in range(data.shape[0]):
    vars_mean.append(list(data.mean(axis=0)))
    # axis=0 means each column in the 2D array
Note that I repeated the mean vector for each row just to take advantage of matrix subtraction, as shown below.
Next, we find x - m (i.e., the differences), but since we already have the vectorized vars_mean, all we need to do is:

diff = data - vars_mean
# here we subtract the mean of each feature
# from each feature of each example
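For what it's worth, NumPy broadcasting makes the explicit repetition unnecessary; this one-liner (my variant, not the original answer's) produces the same diff:

diff = data - data.mean(axis=0)  # broadcasting repeats the mean row implicitly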
Finally, apply the formula like this:

md = []
for i in range(len(diff)):
    md.append(np.sqrt(diff[i].dot(inv_covariance_matrix).dot(diff[i])))
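The loop can also be vectorized with einsum; a sketch of an equivalent, loop-free version (my variant, assuming the same diff and inv_covariance_matrix as above):

# computes sum_j sum_k diff[i,j] * inv[j,k] * diff[i,k] for every row i
md = np.sqrt(np.einsum('ij,jk,ik->i', diff, inv_covariance_matrix, diff))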
Note the following:

- The dimensions of the inverse of the covariance matrix (and of the covariance matrix itself) are number_of_features x number_of_features.
- The diff matrix has the same dimensions as the original data matrix: number_of_examples x number_of_features.
- Each diff[i] (i.e., row) is 1 x number_of_features.
- The result of diff[i].dot(inv_covariance_matrix) is therefore 1 x number_of_features, and when we multiply it by diff[i] again, numpy automatically treats the latter as a column matrix (i.e., number_of_features x 1), so the final result is a single value (i.e., no transpose is needed).

To detect outliers, we should specify a threshold; we do that by taking the mean of the Mahalanobis distance results plus or minus the extremeness degree k, where k = 2.0 * std for extreme values and 3.0 * std for very extreme values, according to the 68–95–99.7 rule.

Putting it all together
import numpy as np

def create_data(examples=50, features=5, upper_bound=10, outliers_fraction=0.1, extreme=False):
    '''
    This method is for testing (i.e., it generates a 2D array of data)
    '''
    data = []
    magnitude = 4 if extreme else 3
    for i in range(examples):
        if (examples - i) <= round((float(examples) * outliers_fraction)):
            data.append(np.random.poisson(upper_bound ** magnitude, features).tolist())
        else:
            data.append(np.random.poisson(upper_bound, features).tolist())
    return np.array(data)

def MahalanobisDist(data, verbose=False):
    covariance_matrix = np.cov(data, rowvar=False)
    if is_pos_def(covariance_matrix):
        inv_covariance_matrix = np.linalg.inv(covariance_matrix)
        if is_pos_def(inv_covariance_matrix):
            vars_mean = []
            for i in range(data.shape[0]):
                vars_mean.append(list(data.mean(axis=0)))
            diff = data - vars_mean
            md = []
            for i in range(len(diff)):
                md.append(np.sqrt(diff[i].dot(inv_covariance_matrix).dot(diff[i])))
            if verbose:
                print("Covariance Matrix:\n {}\n".format(covariance_matrix))
                print("Inverse of Covariance Matrix:\n {}\n".format(inv_covariance_matrix))
                print("Variables Mean Vector:\n {}\n".format(vars_mean))
                print("Variables - Variables Mean Vector:\n {}\n".format(diff))
                print("Mahalanobis Distance:\n {}\n".format(md))
            return md
        else:
            print("Error: Inverse of Covariance Matrix is not positive definite!")
    else:
        print("Error: Covariance Matrix is not positive definite!")

def MD_detectOutliers(data, extreme=False, verbose=False):
    MD = MahalanobisDist(data, verbose)
    # one popular way to specify the threshold
    # m = np.mean(MD)
    # t = 3. * m if extreme else 2. * m
    # outliers = []
    # for i in range(len(MD)):
    #     if MD[i] > t:
    #         outliers.append(i)  # index of the outlier
    # return np.array(outliers)

    # or according to the 68–95–99.7 rule
    std = np.std(MD)
    k = 3. * std if extreme else 2. * std
    m = np.mean(MD)
    up_t = m + k
    low_t = m - k
    outliers = []
    for i in range(len(MD)):
        if (MD[i] >= up_t) or (MD[i] <= low_t):
            outliers.append(i)  # index of the outlier
    return np.array(outliers)

def is_pos_def(A):
    if np.allclose(A, A.T):
        try:
            np.linalg.cholesky(A)
            return True
        except np.linalg.LinAlgError:
            return False
    else:
        return False

data = create_data(15, 3, 10, 0.1)
print("data:\n {}\n".format(data))

outliers_indices = MD_detectOutliers(data, verbose=True)
print("Outliers Indices: {}\n".format(outliers_indices))
print("Outliers:")
for ii in outliers_indices:
    print(data[ii])
Results
data:
[[ 12 7 9]
[ 9 16 7]
[ 14 11 10]
[ 14 5 5]
[ 12 8 7]
[ 8 8 10]
[ 9 14 8]
[ 12 12 10]
[ 18 10 6]
[ 6 12 11]
[ 4 12 15]
[ 5 13 10]
[ 8 9 8]
[106 116 97]
[ 90 116 114]]
Covariance Matrix:
[[ 980.17142857 1143.62857143 1035.6 ]
[1143.62857143 1385.11428571 1263.12857143]
[1035.6 1263.12857143 1170.74285714]]
Inverse of Covariance Matrix:
[[ 0.03021777 -0.03563241 0.0117146 ]
[-0.03563241 0.08684092 -0.06217448]
[ 0.0117146 -0.06217448 0.05757261]]
Variables Mean Vector:
[[21.8, 24.6, 21.8], [21.8, 24.6, 21.8], [21.8, 24.6, 21.8], [21.8, 24.6, 21.8], [21.8, 24.6, 21.8], [21.8, 24.6, 21.8], [21.8, 24.6, 21.8], [21.8, 24.6, 21.8], [21.8, 24.6, 21.8], [21.8, 24.6, 21.8], [21.8, 24.6, 21.8], [21.8, 24.6, 21.8], [21.8, 24.6, 21.8], [21.8, 24.6, 21.8], [21.8, 24.6, 21.8]]
Variables - Variables Mean Vector:
[[ -9.8 -17.6 -12.8]
[-12.8 -8.6 -14.8]
[ -7.8 -13.6 -11.8]
[ -7.8 -19.6 -16.8]
[ -9.8 -16.6 -14.8]
[-13.8 -16.6 -11.8]
[-12.8 -10.6 -13.8]
[ -9.8 -12.6 -11.8]
[ -3.8 -14.6 -15.8]
[-15.8 -12.6 -10.8]
[-17.8 -12.6 -6.8]
[-16.8 -11.6 -11.8]
[-13.8 -15.6 -13.8]
[ 84.2 91.4 75.2]
[ 68.2 91.4 92.2]]
Mahalanobis Distance:
[1.3669401667524865, 2.1796331318432967, 0.7470525416547134, 1.6364973119931507, 0.8351423113609481, 0.9128858131134882, 1.397144258271586, 0.35603382066414996, 1.4449501739129382, 0.9668775289588046, 1.490503433100514, 1.4021488309805878, 0.4500345257064412, 3.239353067840299, 3.260149280200771]
Outliers Indices: [13 14]
Outliers:
[106 116 97]
[ 90 116 114]
For python - multivariate outlier removal with Mahalanobis distance, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/46827580/