My data is a set of n observed pairs along with their frequencies, i.e. to each pair (xi, yi) there corresponds some ki, the number of times (xi, yi) was observed. Ideally, I would like to calculate both Kendall's tau and Spearman's rho for the set of all copies of these pairs, which consists of k1 + k2 + ... + kn pairs. The problem is that k1 + k2 + ... + kn, the total number of observations, is too large for such a data structure to fit into memory.
Naturally, I thought of assigning the frequency of the i-th pair, ki / (k1 + k2 + ... + kn), as its weight, and calculating the rank correlation of the weighted set, but I could not find any tools for that. In the weighted variants of rank correlation I have come across (e.g. scipy.stats.weightedtau), the weights represent the importance of ranks rather than of pairs, which is irrelevant to my case. Pearson's r seems to have exactly the weighting option I need, but it does not serve my purpose, since x and y are not linearly related. I wonder whether I am missing some notion of generalized correlation for weighted data points.
The only idea I have come up with so far is to scale k1, k2, ..., kn down by some common factor c, so that the scaled number of copies of the i-th pair is [ki/c] (here [.] is the rounding operator, since we need an integer number of copies of each pair). By choosing c such that [k1/c] + [k2/c] + ... + [kn/c] pairs fit into memory, we can then calculate the correlation coefficients tau and rho for the resulting set. However, ki and kj may differ by several orders of magnitude, so c may be significantly large relative to some of the ki, and rounding ki/c may therefore cause a loss of information.
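A minimal sketch of this workaround (the helper name and the factor c are my own illustration; c would be tuned to the available memory):

import numpy as np
from scipy import stats

def scaled_rank_correlations(x, y, k, c):
    # Round the scaled frequencies; small k_i may round to zero, which is
    # exactly the loss of information discussed above.
    counts = np.rint(np.asarray(k) / c).astype(int)
    x_rep = np.repeat(x, counts)
    y_rep = np.repeat(y, counts)
    # Plain Kendall tau and Spearman rho on the reduced set of copies.
    return stats.kendalltau(x_rep, y_rep), stats.spearmanr(x_rep, y_rep)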
UPD: Spearman's rho, together with the corresponding p-values, can be calculated on a dataset with specified frequency weights as follows:
import numpy as np
from scipy import stats

def frequency_pearsonr(data, frequencies):
    """
    Calculates Pearson's r between columns (variables), given the
    frequencies of the rows (observations).

    :param data: 2-D array with data
    :param frequencies: 1-D array with frequencies
    :return: 2-D array with pairwise correlations,
        2-D array with pairwise p-values
    """
    df = frequencies.sum() - 2
    Sigma = np.cov(data.T, fweights=frequencies)
    sigma_diag = Sigma.diagonal()
    Sigma_diag_pairwise_products = np.multiply.outer(sigma_diag, sigma_diag)
    # Calculate matrix with pairwise correlations.
    R = Sigma / np.sqrt(Sigma_diag_pairwise_products)
    # Calculate matrix with pairwise t-statistics. Main diagonal should
    # get 1 / 0 = inf.
    with np.errstate(divide='ignore'):
        T = R / np.sqrt((1 - R * R) / df)
    # Calculate matrix with pairwise p-values.
    P = 2 * stats.t.sf(np.abs(T), df)
    return R, P

def frequency_rank(data, frequencies):
    """
    Ranks 1-D data array, given the frequency of each value. Same
    values get same "averaged" ranks. Array with ranks is shaped to
    match the input data array.

    :param data: 1-D array with data
    :param frequencies: 1-D array with frequencies
    :return: 1-D array with ranks
    """
    s = 0
    ranks = np.empty_like(data)
    # Compute rank for each unique value.
    for value in sorted(set(data)):
        index_grid = np.ix_(data == value)
        # Find total frequency of the value.
        frequency = frequencies[index_grid].sum()
        ranks[index_grid] = s + 0.5 * (frequency + 1)
        s += frequency
    return ranks
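As a quick sanity check (a hypothetical snippet, reusing the example data given below), these frequency-based ranks should coincide with the averaged ranks that scipy.stats.rankdata assigns on the expanded data:

from scipy.stats import rankdata

compact_ranks = frequency_rank(np.array([.67, .25, .75, .2, .6]),
                               np.array([2, 4, 1, 3, 2]))
# -> [10.5  5.5  12.   2.   8.5]
expanded_ranks = rankdata([.67, .67, .25, .25, .25, .25,
                           .75, .2, .2, .2, .6, .6])
# rankdata yields the same averaged ranks, one entry per repeated copy.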
def frequency_spearmanrho(data, frequencies):
    """
    Calculates Spearman's rho between columns (variables), given the
    frequencies of the rows (observations).

    :param data: 2-D array with data
    :param frequencies: 1-D array with frequencies
    :return: 2-D array with pairwise correlations,
        2-D array with pairwise p-values
    """
    # Rank the columns.
    ranks = np.empty_like(data)
    for i, data_column in enumerate(data.T):
        ranks[:, i] = frequency_rank(data_column, frequencies)
    # Compute Pearson's r correlation and p-values on the ranks.
    return frequency_pearsonr(ranks, frequencies)

# Columns are variables and rows are observations, whose frequencies
# are specified.
data_col1 = np.array([1, 0, 1, 0, 1])
data_col2 = np.array([.67, .25, .75, .2, .6])
data_col3 = np.array([.1, .3, .8, .3, .2])
data = np.array([data_col1, data_col2, data_col3]).T
frequencies = np.array([2, 4, 1, 3, 2])
# Same data, but with observations (rows) actually repeated instead of
# their frequencies being specified.
expanded_data_col1 = np.array([1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1])
expanded_data_col2 = np.array([.67, .67, .25, .25, .25, .25, .75, .2, .2, .2, .6, .6])
expanded_data_col3 = np.array([.1, .1, .3, .3, .3, .3, .8, .3, .3, .3, .2, .2])
expanded_data = np.array([expanded_data_col1, expanded_data_col2, expanded_data_col3]).T
# Compute Spearman's rho for data in both formats, and compare.
frequency_Rho, frequency_P = frequency_spearmanrho(data, frequencies)
Rho, P = stats.spearmanr(expanded_data)
print(frequency_Rho - Rho)
print(frequency_P - P)
This particular example shows that both approaches produce identical correlations and identical p-values:
[[ 0.00000000e+00 0.00000000e+00 0.00000000e+00]
[ 1.11022302e-16 0.00000000e+00 -5.55111512e-17]
[ 0.00000000e+00 -5.55111512e-17 0.00000000e+00]]
[[ 0.00000000e+00 -1.35525272e-19 4.16333634e-17]
[ -9.21571847e-19 0.00000000e+00 -5.55111512e-17]
[ 4.16333634e-17 -5.55111512e-17 0.00000000e+00]]
Best Answer
The approach suggested by Paul for calculating Kendall's tau works. However, you don't have to assign the indices of a sorted array as ranks; the indices of an unsorted array work just as well (as demonstrated in the example with weighted tau). Weights don't have to be normalized, either.
Regular (unweighted) Kendall's tau (on the "expanded" dataset):
stats.kendalltau([0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1],
                 [.25, .25, .25, .25, .2, .2, .2, .667, .667, .75, .6, .6])

KendalltauResult(correlation=0.7977240352174656, pvalue=0.0034446936330652677)
Weighted Kendall's tau (on the compact dataset, with occurrence counts as weights):
stats.weightedtau([1, 0, 1, 0, 1],
                  [.667, .25, .75, .2, .6],
                  rank=False,
                  weigher=lambda r: [2, 4, 1, 3, 2][r],
                  additive=False)

WeightedTauResult(correlation=0.7977240352174656, pvalue=nan)
Now, due to the peculiarities of weightedtau's implementation, the p-value is never calculated. We could approximate the p-value with the originally proposed trick of scaling the occurrence counts down, but I would highly appreciate other approaches. Making the algorithm's behavior depend on the amount of available memory strikes me as painful.
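One hedged way to combine the two (the helper below and the scaling factor c are my own illustration, not part of scipy): take the exact correlation from weightedtau, and borrow the approximate p-value from a plain Kendall tau computed on a scaled-down expansion that fits into memory.

import numpy as np
from scipy import stats

def weightedtau_with_approx_pvalue(x, y, k, c):
    x, y, k = map(np.asarray, (x, y, k))
    # Exact weighted correlation, with the frequencies k as weights.
    corr, _ = stats.weightedtau(x, y, rank=False,
                                weigher=lambda r: k[r], additive=False)
    # Approximate p-value from an unweighted Kendall tau on the reduced
    # set of copies; the rounding loss discussed above applies.
    counts = np.rint(k / c).astype(int)
    _, pvalue = stats.kendalltau(np.repeat(x, counts), np.repeat(y, counts))
    return corr, pvalue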
Related question on Stack Overflow (python - rank correlation with frequency weights in Python): https://stackoverflow.com/questions/46260215/