python - Implementing PCA with Numpy

Tags: python numpy machine-learning pca

I want to implement PCA with a class similar to the one in sklearn.

The algorithm I am following for PCA with k principal components is:

  • Compute the sample mean and translate the dataset so that it is centered at the origin.
  • Compute the covariance matrix of the centered dataset.
  • Find the eigenvalues and eigenvectors and sort them in descending order.
  • Project the dataset onto the vector space spanned by the first k eigenvectors.
import numpy as np


class MyPCA:
    def __init__(self, n_components):
        self.n_components = n_components

    def fit_transform(self, X):
        """
        Assumes observations in X are passed as rows of a numpy array.
        """

        # Translate the dataset so it's centered around 0
        translated_X = X - np.mean(X, axis=0)

        # Calculate the eigenvalues and eigenvectors of the covariance matrix
        e_values, e_vectors = np.linalg.eigh(np.cov(translated_X.T))

        # Sort eigenvalues and their eigenvectors in descending order
        e_ind_order = np.flip(e_values.argsort())
        e_values = e_values[e_ind_order]
        e_vectors = e_vectors[e_ind_order]

        # Save the first n_components eigenvectors as principal components
        principal_components = np.take(e_vectors, np.arange(self.n_components), axis=0)

        return np.matmul(translated_X, principal_components.T)

However, when run on the Iris dataset, this implementation produces results very different from sklearn's, and they do not show the three distinct groups present in the data:

from sklearn import datasets
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt


def plot_pca_results(pca_class, dataset, plot_title):
    X = dataset.data
    y = dataset.target
    y_names = dataset.target_names

    pca = pca_class(n_components=1)
    B = pca.fit_transform(X)
    B = np.concatenate([B, np.zeros_like(B)], 1)

    scatter = plt.scatter(B[:, 0], B[:, 1], c=y)
    scatter_objects, _ = scatter.legend_elements()
    plt.title(plot_title)
    plt.legend(scatter_objects, y_names, loc="lower left", title="Classes")
    plt.show()


dataset = datasets.load_iris()
plot_pca_results(MyPCA, dataset, "Iris - my PCA")
plot_pca_results(PCA, dataset, "Iris - Sklearn")

[Figures: results of my PCA vs. the intended results from sklearn]

What is causing this difference? Where is my approach or calculation incorrect?

Best Answer

Comparing the two methods

The problem lies in the lack of standardization of the data and in how the eigenvectors (principal axes) are extracted. The function below compares the two approaches.

import numpy as np
from sklearn import datasets
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D 

def pca_comparison(X, n_components, labels):
  """X: Standardized dataset, observations on rows
     n_components: dimensionality of the reduced space
     labels: targets, for visualization
  """

  # numpy
  # -----

  # eigendecomposition of the covariance matrix
  # (np.cov expects variables as rows, hence the transpose)
  X_cov = np.cov(X.T)
  e_values, e_vectors = np.linalg.eigh(X_cov)

  # Sort eigenvalues and their eigenvectors in descending order
  e_ind_order = np.flip(e_values.argsort())
  e_values = e_values[e_ind_order]
  e_vectors = e_vectors[:, e_ind_order] # note that we have to re-order the columns, not rows

  # now we can project the dataset onto the eigenvectors (principal axes)
  prin_comp_evd = X @ e_vectors

  # sklearn
  # -------

  pca = PCA(n_components=n_components)
  prin_comp_sklearn = pca.fit_transform(X)

  # plotting

  if n_components == 3:
    fig = plt.figure(figsize=(10, 5))
    ax = fig.add_subplot(121, projection='3d')
    ax.scatter(prin_comp_sklearn[:, 0],
               prin_comp_sklearn[:, 1],
               prin_comp_sklearn[:, 2],
               c=labels)
    ax.set_title("sklearn plot")

    ax = fig.add_subplot(122, projection='3d')
    ax.scatter(prin_comp_evd[:, 0],
                  prin_comp_evd[:, 1],
                  prin_comp_evd[:, 2],
                  c=labels)
    ax.set_title("PCA using EVD plot")
    fig.suptitle(f"Plots for reducing to {n_components}-D")
    plt.show()

  elif n_components == 2:
    fig, ax = plt.subplots(1, 2, figsize=(10, 5))
    ax[0].scatter(prin_comp_sklearn[:, 0], 
                prin_comp_sklearn[:, 1],
                c=labels)
    ax[0].set_title("sklearn plot")
    ax[1].scatter(prin_comp_evd[:, 0], 
                prin_comp_evd[:, 1],
                c=labels)
    ax[1].set_title("PCA using EVD plot")
    fig.suptitle(f"Plots for reducing to {n_components}-D")
    plt.show()

  elif n_components == 1:
    fig, ax = plt.subplots(1, 2, figsize=(10, 5))
    ax[0].scatter(prin_comp_sklearn[:, 0], 
                np.zeros_like(prin_comp_sklearn[:, 0]),
                c=labels)
    ax[0].set_title("sklearn plot")
    ax[1].scatter(prin_comp_evd[:, 0], 
                np.zeros_like(prin_comp_evd[:, 0]),
                c=labels)
    ax[1].set_title("PCA using EVD plot")
    fig.suptitle(f"Plots for reducing to {n_components}-D")
    plt.show()

  return prin_comp_sklearn, prin_comp_evd[:, :n_components] 

Load the dataset, preprocess it, and run the experiment:

dataset = datasets.load_iris()

X = dataset.data
mean = np.mean(X, axis=0)

# this was missing in your implementation
std = np.std(X, axis=0)
X_std = (X - mean) / std

for n in [3, 2, 1]:
  pca_comparison(X_std, n, dataset.target)

Results

[Figures: sklearn vs. EVD comparison plots for the 3-D, 2-D, and 1-D reductions]

The 3D plot is a bit cluttered, but if you look at the 2D and 1D cases, you'll see the plots are identical if we multiply the first principal component by -1; the scikit-learn PCA implementation uses singular value decomposition under the hood, which gives non-unique solutions (see here).

Testing:

Using the flip_signs() function from here:

def flip_signs(A, B):
    """
    utility function for resolving the sign ambiguity in SVD
    http://stats.stackexchange.com/q/34396/115202
    """
    signs = np.sign(A) * np.sign(B)
    return A, B * signs

for n in [3, 2, 1]:
    sklearn_pca, evd = pca_comparison(X_std, n, dataset.target)
    assert np.allclose(*flip_signs(sklearn_pca, evd))
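As an additional sanity check (not part of the original test), sklearn exposes the principal axes as the rows of pca.components_, so they should match the leading eigenvector columns up to sign:

# Hypothetical extra check: sklearn's principal axes (rows of components_)
# should equal the leading eigenvector columns up to sign
pca = PCA(n_components=2).fit(X_std)
e_values, e_vectors = np.linalg.eigh(np.cov(X_std.T))
order = np.flip(e_values.argsort())
axes_evd = e_vectors[:, order][:, :2].T  # axes as rows, like components_
assert np.allclose(*flip_signs(pca.components_, axes_evd))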

Problems in your implementation:

  1. If we look at the scales of the features in the iris dataset, we see that they differ. This suggests we should standardize the data (read here for more).

Quoting part of the answer linked above:

Continued by @ttnphns

When would one prefer to do PCA (or factor analysis or other similar type of analysis) on correlations (i.e. on z-standardized variables) instead of doing it on covariances (i.e. on centered variables)?

When the variables are different units of measurement. That's clear

...

  2. To obtain the principal axes using e_values, e_vectors = np.linalg.eigh(X_cov), you should extract the columns of e_vectors (documentation). You are extracting the rows.
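
For reference, here is a minimal corrected sketch of the question's MyPCA class with the column-extraction fix applied; standardization is deliberately left to the caller, matching the experiment above:

import numpy as np


class MyPCA:
    def __init__(self, n_components):
        self.n_components = n_components

    def fit_transform(self, X):
        """
        Assumes observations in X are passed as rows of a numpy array.
        Standardize X beforehand if the features have different scales.
        """

        # Center the dataset around 0
        translated_X = X - np.mean(X, axis=0)

        # Eigendecomposition of the covariance matrix
        e_values, e_vectors = np.linalg.eigh(np.cov(translated_X.T))

        # Sort in descending eigenvalue order; the eigenvectors are the
        # *columns* of e_vectors, so reorder columns, not rows
        e_ind_order = np.flip(e_values.argsort())
        e_vectors = e_vectors[:, e_ind_order]

        # Keep the first n_components columns as principal axes and project
        return translated_X @ e_vectors[:, :self.n_components]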
Original question on Stack Overflow: https://stackoverflow.com/questions/58666635/
