Which one is faster, and why? Or are they the same? Does the answer depend on anything (dictionary size, data types, etc.)?
The traditional way:

for key in dict:
    x = dict[key]
    x = key

The hipster way:

for key, value in dict.items():
    y = value
    y = key
I haven't seen an exact duplicate of this question, but if there is one, I'd be happy to have it pointed out.
Best answer
It turns out there actually is a significant difference (though, going by the numbers below, closer to 2-3.5x than a full order of magnitude).

I don't know much about performance testing, but what I tried to do was create three dictionaries of different sizes, with each smaller dictionary a subset of the larger ones. I then ran all three through the two functions (traditional vs. hipster), and repeated the whole thing 100 times.

The sizes (number of key-value pairs) of dict1, dict2 and dict3 are 1000, 50000 and 500000, respectively.
There appears to be a significant difference: d.items() is consistently faster, and by a wider margin on the larger dictionaries. This is in line with expectations (Python generally rewards "pythonic" code).
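One plausible explanation (my reading, not stated in the answer): d[key] performs a fresh hash lookup for every key, while d.items() walks the underlying table once and yields ready-made (key, value) pairs. A minimal, self-contained sketch of the same comparison (the dictionary size and iteration count here are illustrative, not the benchmark above):

```python
import timeit

# d[key] hashes every key again inside the loop; d.items() yields the
# stored (key, value) pairs directly, skipping the per-key lookup.
d = {i: i for i in range(100_000)}

t_lookup = timeit.timeit(lambda: [d[k] for k in d], number=50)
t_items = timeit.timeit(lambda: [v for k, v in d.items()], number=50)

print(f"d[key]:    {t_lookup:.4f}s")
print(f"d.items(): {t_items:.4f}s")
```

On CPython this reliably shows d.items() ahead, though the exact ratio varies by machine and dictionary contents.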
Results:
--d[key]--
dict1 -- mean: 0.0001113555802294286, st. dev: 1.9951038526222054e-05
dict2 -- mean: 0.01669296698019025, st. dev: 0.019088713496142
dict3 -- mean: 0.2553815016898443, st. dev: 0.02778986771642094
--d.items()--
dict1 -- mean: 6.005059978633653e-05, st. dev: 1.1960199272812617e-05
dict2 -- mean: 0.00507106617995305, st. dev: 0.009871762371401046
dict3 -- mean: 0.07369932165958744, st. dev: 0.023440325168927384
The code that produced the results (runnable on repl.it):
import timeit
import random
import statistics

def traditional(dicty):
    for key in dicty:
        x = dicty[key]
        x = key

def hipster(dicty):
    for key, value in dicty.items():
        y = value
        y = key

def generate_random_dicts():
    # Build dict1 ⊂ dict2 ⊂ dict3 with 1000, 50000 and 500000 insertions
    # (random keys can collide, so the final sizes are approximate).
    random_dict1, random_dict2, random_dict3 = {}, {}, {}
    for _ in range(1000):
        key = generate_random_str_one_to_ten_chars()
        val = generate_random_str_one_to_ten_chars()
        random_dict1[key] = val
        random_dict2[key] = val
        random_dict3[key] = val
    for _ in range(49000):
        key = generate_random_str_one_to_ten_chars()
        val = generate_random_str_one_to_ten_chars()
        random_dict2[key] = val
        random_dict3[key] = val
    for _ in range(450000):
        key = generate_random_str_one_to_ten_chars()
        val = generate_random_str_one_to_ten_chars()
        random_dict3[key] = val
    return [random_dict1, random_dict2, random_dict3]

def generate_random_str_one_to_ten_chars():
    ret_str = ""
    for x in range(random.randrange(1, 11)):  # 1-10 chars, as the name says
        ret_str += chr(random.randrange(40, 126))
    return ret_str

dict1, dict2, dict3 = generate_random_dicts()
test_dicts = [dict1, dict2, dict3]
times = {}
times['traditional_times'] = {}
times['hipster_times'] = {}

for _ in range(100):
    for itr, dictx in enumerate(test_dicts):
        start = timeit.default_timer()
        traditional(dictx)
        end = timeit.default_timer()
        time = end - start
        times['traditional_times'].setdefault(f"dict{itr+1}", []).append(time)

        start = timeit.default_timer()
        hipster(dictx)
        end = timeit.default_timer()
        time = end - start
        times['hipster_times'].setdefault(f"dict{itr+1}", []).append(time)

print("--d[key]--")
for x in times['traditional_times']:
    ltimes = times['traditional_times'][x]
    mean = statistics.mean(ltimes)
    stdev = statistics.stdev(ltimes)
    print(f"{x} -- mean: {mean}, st. dev: {stdev}\n\n")

print("--d.items()--")
for x in times['hipster_times']:
    ltimes = times['hipster_times'][x]
    mean = statistics.mean(ltimes)
    stdev = statistics.stdev(ltimes)
    print(f"{x} -- mean: {mean}, st. dev: {stdev}")
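As an aside, timeit can drive the whole comparison directly; timeit.repeat with the minimum of several runs is often preferred over mean/stdev, since the minimum is least distorted by background load. A rough sketch of that approach (the dictionary size, repeat and number values are illustrative choices, not from the answer above):

```python
import timeit

# Time each loop body 20 times per run, over 5 runs, and keep the best run.
setup = "d = {i: str(i) for i in range(50_000)}"

trad = min(timeit.repeat("for k in d:\n    x = d[k]\n    x = k",
                         setup=setup, repeat=5, number=20))
hip = min(timeit.repeat("for k, v in d.items():\n    y = v\n    y = k",
                        setup=setup, repeat=5, number=20))

print(f"d[key]    best of 5: {trad:.4f}s")
print(f"d.items() best of 5: {hip:.4f}s")
```

This also keeps the dictionary construction out of the timed region, which timeit's setup argument handles for free.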
Regarding "python - Performance in Python 3 dictionary iteration: dict[key] vs. dict.items()", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/53366393/