python - Why is reading one byte 20x slower than reading 2, 3, 4, ... bytes from a file?

Tags: python file io benchmarking

I have been trying to understand the tradeoff between read and seek. For small "hops", reading unneeded data is faster than skipping it with seek.
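For context, here is a minimal, self-contained sketch of the two skip strategies being compared (the file name, hop size, and iteration count are arbitrary illustrative choices, not part of the benchmark below):

import os
import timeit

FILE_NAME = 'skip_demo.bin'  # throwaway demo file, not the benchmark's test.bin
hop = 16     # bytes to skip per iteration
n = 10000    # iterations per strategy

# create just enough data for both passes
with open(FILE_NAME, 'wb') as f:
    f.write(bytes(2 * hop * n))

with open(FILE_NAME, 'rb') as f:
    # skip by reading the data and throwing it away
    t_read = timeit.timeit(lambda: f.read(hop), number=n)
    f.seek(0)
    # skip by seeking relative to the current position (whence=1)
    t_seek = timeit.timeit(lambda: f.seek(hop, 1), number=n)

print('skip via read: {:.4f} s, skip via seek: {:.4f} s'.format(t_read, t_seek))
os.unlink(FILE_NAME)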

While timing various read/seek block sizes to find the tipping point, I came across an odd phenomenon: read(1) is about 20 times slower than read(2), read(3), etc. The effect is the same across read methods, e.g. read() and readinto().

Why is this the case?

Search for the following line, about 2/3 of the way through the timing results:

2 x buffered 1 byte readinto bytearray

Environment:

Python 3.5.2 |Continuum Analytics, Inc.| (default, Jul  5 2016, 11:45:57) [MSC v.1900 32 bit (Intel)]

Timing results:

Non-cachable binary data ingestion (file object blk_size = 8192):
- 2 x buffered 0 byte readinto bytearray:
      robust mean: 6.01 µs +/- 377 ns
      min: 3.59 µs
- Buffered 0 byte seek followed by 0 byte readinto:
      robust mean: 9.31 µs +/- 506 ns
      min: 6.16 µs
- 2 x buffered 4 byte readinto bytearray:
      robust mean: 14.4 µs +/- 6.82 µs
      min: 2.57 µs
- 2 x buffered 7 byte readinto bytearray:
      robust mean: 14.5 µs +/- 6.76 µs
      min: 3.08 µs
- 2 x buffered 2 byte readinto bytearray:
      robust mean: 14.5 µs +/- 6.77 µs
      min: 3.08 µs
- 2 x buffered 5 byte readinto bytearray:
      robust mean: 14.5 µs +/- 6.76 µs
      min: 3.08 µs
- 2 x buffered 3 byte readinto bytearray:
      robust mean: 14.5 µs +/- 6.73 µs
      min: 2.57 µs
- 2 x buffered 49 byte readinto bytearray:
      robust mean: 14.5 µs +/- 6.72 µs
      min: 2.57 µs
- 2 x buffered 6 byte readinto bytearray:
      robust mean: 14.6 µs +/- 6.76 µs
      min: 3.08 µs
- 2 x buffered 343 byte readinto bytearray:
      robust mean: 15.3 µs +/- 6.43 µs
      min: 3.08 µs
- 2 x buffered 2401 byte readinto bytearray:
      robust mean: 138 µs +/- 247 µs
      min: 4.11 µs
- Buffered 7 byte seek followed by 7 byte readinto:
      robust mean: 278 µs +/- 333 µs
      min: 15.4 µs
- Buffered 3 byte seek followed by 3 byte readinto:
      robust mean: 279 µs +/- 333 µs
      min: 14.9 µs
- Buffered 1 byte seek followed by 1 byte readinto:
      robust mean: 279 µs +/- 334 µs
      min: 15.4 µs
- Buffered 2 byte seek followed by 2 byte readinto:
      robust mean: 279 µs +/- 334 µs
      min: 15.4 µs
- Buffered 4 byte seek followed by 4 byte readinto:
      robust mean: 279 µs +/- 334 µs
      min: 15.4 µs
- Buffered 49 byte seek followed by 49 byte readinto:
      robust mean: 281 µs +/- 336 µs
      min: 14.9 µs
- Buffered 6 byte seek followed by 6 byte readinto:
      robust mean: 281 µs +/- 337 µs
      min: 15.4 µs
- 2 x buffered 1 byte readinto bytearray:
      robust mean: 282 µs +/- 334 µs
      min: 17.5 µs
- Buffered 5 byte seek followed by 5 byte readinto:
      robust mean: 282 µs +/- 338 µs
      min: 15.4 µs
- Buffered 343 byte seek followed by 343 byte readinto:
      robust mean: 283 µs +/- 340 µs
      min: 15.4 µs
- Buffered 2401 byte seek followed by 2401 byte readinto:
      robust mean: 309 µs +/- 373 µs
      min: 15.4 µs
- Buffered 16807 byte seek followed by 16807 byte readinto:
      robust mean: 325 µs +/- 423 µs
      min: 15.4 µs
- 2 x buffered 16807 byte readinto bytearray:
      robust mean: 457 µs +/- 558 µs
      min: 16.9 µs
- Buffered 117649 byte seek followed by 117649 byte readinto:
      robust mean: 851 µs +/- 1.08 ms
      min: 15.9 µs
- 2 x buffered 117649 byte readinto bytearray:
      robust mean: 1.29 ms +/- 1.63 ms
      min: 18 µs

Benchmark code:

from _utils import BenchmarkResults

from timeit import timeit, repeat
import gc
import os
from contextlib import suppress
from math import floor
from random import randint

### Configuration

FILE_NAME = 'test.bin'
r = 5000
n = 100

reps = 1

chunk_sizes = list(range(7)) + [7**x for x in range(1,7)]

results = BenchmarkResults(description = 'Non-cachable binary data ingestion')


### Setup

FILE_SIZE = int(100e6)

# remove left over test file
with suppress(FileNotFoundError):
    os.unlink(FILE_NAME)

# determine how large a file needs to be to not fit in memory
gc.collect()
try:
    while True:
        data = bytearray(FILE_SIZE)
        del data
        FILE_SIZE *= 2
        gc.collect()
except MemoryError:
    FILE_SIZE *= 2
    print('Using file with {} GB'.format(FILE_SIZE / 1024**3))

# check enough data in file
required_size = sum(chunk_sizes)*2*2*reps*r
print('File size used: {} GB'.format(required_size / 1024**3))
assert required_size <= FILE_SIZE


# create test file
with open(FILE_NAME, 'wb') as file:
    buffer_size = int(10e6)
    data = bytearray(buffer_size)
    for i in range(int(FILE_SIZE / buffer_size)):
        file.write(data)

# read file once to try to force it into system cache as much as possible
from io import DEFAULT_BUFFER_SIZE
buffer_size = 10*DEFAULT_BUFFER_SIZE
buffer = bytearray(buffer_size)
with open(FILE_NAME, 'rb') as file:
    bytes_read = True
    while bytes_read:
        bytes_read = file.readinto(buffer)
    blk_size = file.raw._blksize

results.description += ' (file object blk_size = {})'.format(blk_size)

file = open(FILE_NAME, 'rb')

### Benchmarks

setup = \
"""
# random seek to avoid advantageous starting position biasing results
file.seek(randint(0, file.raw._blksize), 1)
"""

read_read = \
"""
file.read(chunk_size)
file.read(chunk_size)
"""

seek_seek = \
"""
file.seek(buffer_size, 1)
file.seek(buffer_size, 1)
"""

seek_read = \
"""
file.seek(buffer_size, 1)
file.read(chunk_size)
"""

read_read_timings = {}
seek_seek_timings = {}
seek_read_timings = {}
for chunk_size in chunk_sizes:
    read_read_timings[chunk_size] = []
    seek_seek_timings[chunk_size] = []
    seek_read_timings[chunk_size] = []

for j in range(r):
    #file.seek(0)
    for chunk_size in chunk_sizes:
        buffer = bytearray(chunk_size)
        read_read_timings[chunk_size].append(timeit(read_read, setup, number=reps, globals=globals()))
        #seek_seek_timings[chunk_size].append(timeit(seek_seek, setup, number=reps, globals=globals()))
        seek_read_timings[chunk_size].append(timeit(seek_read, setup, number=reps, globals=globals()))

for chunk_size in chunk_sizes:
    results['2 x buffered {} byte readinto bytearray'.format(chunk_size)] = read_read_timings[chunk_size]
    #results['2 x buffered {} byte seek'.format(chunk_size)] = seek_seek_timings[chunk_size]
    results['Buffered {} byte seek followed by {} byte readinto'.format(chunk_size, chunk_size)] = seek_read_timings[chunk_size]


### Cleanup
file.close()
os.unlink(FILE_NAME)

results.show()
results.save()

Edit 2020-02-24:

@finefoot asked for the _utils package in order to run the code above.

from collections import OrderedDict
from math import ceil
from statistics import mean, stdev
from contextlib import suppress
import os
import inspect

class BenchmarkResults(OrderedDict):
    def __init__(self, *args, description='Benchmark Description', **kwArgs):
        self.description = description
        return super(BenchmarkResults, self).__init__(*args, **kwArgs)

    def __repr__(self):
        """Shows the results for the benchmarks in order of ascending duration"""
        characteristic_durations = []
        for name, timings in self.items():
            try:
                characteristic_durations.append(_robust_stats(timings)[0])
            except ValueError:
                if len(timings) > 1:
                    characteristic_durations.append(mean(timings))
                else:
                    characteristic_durations.append(timings[0])
        indx = _argsort(characteristic_durations)
        repr = '{}:\n'.format(self.description)
        items = list(self.items())
        for i in indx:
            name, timings = items[i]
            repr += '- {}:\n'.format(name)
            try:
                stats = _robust_stats(timings)
                repr += '      robust mean: {} +/- {}\n'.format(_units(stats[0]), _units(stats[1]))
            except ValueError:
                repr += '      timings: {}\n'.format(', '.join(map(_units, timings)))
            if len(timings) > 1:
                repr += '      min: {}\n'.format(_units(min(timings)))
        return repr

    def show(self):
        print(self)

    def save(self):
        caller = inspect.stack()[1]
        filename = os.path.splitext(caller.filename)[0] + '.log'
        with open(filename, 'w') as logfile:
            logfile.write(repr(self))


def _units(seconds, significant_figures=3):
    """Format a duration given in seconds with an appropriate unit."""
    fmt = '{:.%sg} {}' % significant_figures
    # check the largest units first so every branch is reachable
    if seconds > 3600:
        return fmt.format(seconds/3600, 'hrs')
    elif seconds > 60:
        return fmt.format(seconds/60, 'min')
    elif seconds > 1:
        return fmt.format(seconds, 's')
    elif seconds > 1e-3:
        return fmt.format(seconds*1e3, 'ms')
    elif seconds > 1e-6:
        return fmt.format(seconds*1e6, 'µs')
    else:
        return fmt.format(seconds*1e9, 'ns')

def _robust_stats(timings, fraction_to_use=0.8):
    if len(timings) < 5:
        raise ValueError('To calculate a robust mean, you need at least 5 timing results')
    elts_to_prune = int(len(timings) * (1 - fraction_to_use))
    # prune at least the highest and the lowest result
    elts_to_prune = elts_to_prune if elts_to_prune > 2 else 2
    # round to an even number --> symmetric pruning
    offset = ceil(elts_to_prune / 2)

    # sort the timings
    timings.sort()
    # prune the required fraction of the elements
    timings = timings[offset:-offset]
    return mean(timings), stdev(timings)

def _argsort(seq):
    # http://stackoverflow.com/questions/3071415/efficient-method-to-calculate-the-rank-vector-of-a-list-in-python
    return sorted(range(len(seq)), key=seq.__getitem__)

if __name__ == '__main__':
    pass

Best Answer

I was able to reproduce the issue with your code. However, I noticed the following: can you verify that the issue disappears if you replace

file.seek(randint(0, file.raw._blksize), 1)

with

file.seek(randint(0, file.raw._blksize), 0)

in your setup? I think you might be running out of data at some point while reading 1 byte. Reading 2 bytes, 3 bytes, etc. won't have any data left to read, which is why it's so much faster.
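To illustrate the suspected mechanism, here is a small self-contained sketch (file name and sizes are made up, not taken from the benchmark): with whence=1 the random seek in the setup is relative to the current position, so the position only ever moves forward and eventually drifts past EOF, where read() returns b'' immediately without any disk access. With whence=0 every setup call jumps back to a random offset within the first blksize bytes, so the benchmark can never run out of data.

import os
from random import randint

FILE_NAME = 'drift_demo.bin'  # throwaway demo file
blksize = 8192

with open(FILE_NAME, 'wb') as f:
    f.write(bytes(100 * blksize))  # 800 KiB of data

with open(FILE_NAME, 'rb') as f:
    for _ in range(1000):
        f.seek(randint(0, blksize), 1)  # relative seek, as in the original setup
    # on average the position advances blksize/2 per call, so after 1000
    # calls it is around 4 MB -- far past the 800 KiB of actual data
    print('position after 1000 relative seeks:', f.tell())
    print('read past EOF returns:', f.read(1))  # b'', returned instantly

os.unlink(FILE_NAME)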

A similar question can be found on Stack Overflow: https://stackoverflow.com/questions/41625529/
