python - Why does numpy.median scale so well?

Tags: python numpy time-complexity

A question I ran into in a recent interview was:

Write a data structure that supports two operations.
1. Adding a number to the structure.
2. Calculating the median.
The operations to add a number and calculate the median must have a minimum time complexity.

My implementation was fairly straightforward: it basically keeps the elements sorted, so that adding an element costs O(log(n)) instead of O(1), while computing the median is O(1) instead of O(n*log(n)).
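The same idea can be sketched with the standard library's `bisect` module (a sketch, not the interview code; note that although the binary search itself is O(log n), inserting into a Python list shifts the trailing elements, so each add is really O(n) overall — the same is true of the slicing-based version below):

```python
from bisect import insort

class SortedMedianList:
    def __init__(self):
        self.values = []

    def add_element(self, x):
        # binary search locates the slot in O(log n),
        # but the insertion shifts elements: O(n) total
        insort(self.values, x)

    def median(self):
        # O(1) once the list is kept sorted
        n = len(self.values)
        mid = n // 2
        if n % 2:
            return self.values[mid]
        return (self.values[mid - 1] + self.values[mid]) / 2
```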

I also added a naive implementation, plus one that keeps the elements in a numpy array:

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from random import randint, random
import math
from time import time

class MedianList():
    def __init__(self, initial_values = []):
        self.values = sorted(initial_values)
        self.size = len(initial_values)

    def add_element(self, element):
        index = self.find_pos(self.values, element)
        self.values = self.values[:index] + [element] + self.values[index:]
        self.size += 1

    def find_pos(self, values, element):
        if len(values) == 0: return 0
        index = int(len(values)/2)
        if element > values[index]: 
            return self.find_pos(values[index+1:], element) + index + 1
        if element < values[index]:
            return self.find_pos(values[:index], element)
        if element == values[index]: return index

    def median(self):
        if self.size == 0: return np.nan
        split = math.floor(self.size/2)
        if self.size % 2 == 1:
            return self.values[split]
        return (self.values[split] + self.values[split-1])/2

class NaiveMedianList():
    def __init__(self, initial_values = []):
        self.values = sorted(initial_values)

    def add_element(self, element):
        self.values.append(element)

    def median(self):
        split = math.floor(len(self.values)/2)
        sorted_values = sorted(self.values)
        if len(self.values) % 2 == 1:
            return sorted_values[split]
        return (sorted_values[split] + sorted_values[split-1])/2

class NumpyMedianList():
    def __init__(self, initial_values = []):
        self.values = np.array(initial_values)

    def add_element(self, element):
        self.values = np.append(self.values, element)

    def median(self):
        return np.median(self.values)

def time_performance(median_list, total_elements = 10**5):
    elements = [randint(0, 100) for _ in range(total_elements)]
    times = []
    start = time()
    for element in elements:
        median_list.add_element(element)
        median_list.median()
        times.append(time() - start)
    return times

ml_times = time_performance(MedianList())
nl_times = time_performance(NaiveMedianList())
npl_times = time_performance(NumpyMedianList())
times = pd.DataFrame()
times['MedianList'] = ml_times
times['NaiveMedianList'] = nl_times
times['NumpyMedianList'] = npl_times
times.plot()
plt.show()

Here is how they perform for 10^4 elements: (plot not shown)

For 10^5 elements, the naive numpy implementation is actually faster:

(plot not shown)

My question is: how come? Even if numpy is faster by some constant factor, how can its median function scale so well if it doesn't keep a sorted version of the array?

Best answer

We can inspect the implementation of median in the Numpy source code (source):

def median(a, axis=None, out=None, overwrite_input=False, keepdims=False):
    ...

    if overwrite_input:
        if axis is None:
            part = a.ravel()
            part.partition(kth)
        else:
            a.partition(kth, axis=axis)
            part = a
    else:
        part = partition(a, kth, axis=axis)

...

The key function is partition, which, according to the docs, uses introselect. As @zython commented, this is a variant of Quickselect, and it provides the crucial performance gain: it finds the kth element in O(n) average time, rather than fully sorting the array in O(n log n).
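Partitioning places the kth element in its final sorted position without sorting anything else, which is why no sorted copy of the array needs to be maintained. A rough sketch of a median built directly on np.partition (an illustration of the technique, not NumPy's actual internals):

```python
import numpy as np

def partition_median(a):
    # np.partition puts the kth element in its sorted position
    # in O(n) average time (introselect), leaving the rest unsorted
    a = np.asarray(a, dtype=float)
    n = a.size
    mid = n // 2
    if n % 2:
        return np.partition(a, mid)[mid]
    # even length: partition on both middle indices at once
    part = np.partition(a, [mid - 1, mid])
    return (part[mid - 1] + part[mid]) / 2
```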

Regarding "python - Why does numpy.median scale so well?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/50899486/
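As an aside, the textbook way to get a genuinely O(log n) add and O(1) median for the interview problem is a pair of heaps; a minimal sketch using heapq (not part of the original question or answer):

```python
import heapq

class MedianHeap:
    """Max-heap of the lower half + min-heap of the upper half."""
    def __init__(self):
        self.lo = []  # max-heap of the lower half, via negated values
        self.hi = []  # min-heap of the upper half

    def add_element(self, x):
        # O(log n): push through the lower half, then rebalance
        heapq.heappush(self.lo, -x)
        heapq.heappush(self.hi, -heapq.heappop(self.lo))
        if len(self.hi) > len(self.lo):
            heapq.heappush(self.lo, -heapq.heappop(self.hi))

    def median(self):
        # O(1): the median sits at the heap tops
        if len(self.lo) > len(self.hi):
            return -self.lo[0]
        return (-self.lo[0] + self.hi[0]) / 2
```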
