A recent similar question (isinstance(foo, types.GeneratorType) or inspect.isgenerator(foo)?) got me curious about how to implement this generically.
It actually seems like a generally useful thing to have: an object that wraps a generator, caches the values on the first pass through (like itertools.cycle does), reports StopIteration, and then returns the items from the cache on the next pass; but if the object isn't a generator (i.e. a list or dict that already supports O(1) lookup), then it doesn't cache and has the same behavior, just over the original list.
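The underlying issue can be shown with a quick sketch (plain Python, nothing from the question assumed): a list can be walked any number of times, but an iterator is consumed by its first pass.

```python
nums = [1, 2, 3]                 # a list is an iterable: every pass works
assert list(nums) == [1, 2, 3]
assert list(nums) == [1, 2, 3]   # can be walked again

it = iter(nums)                  # an iterator is single-use
assert list(it) == [1, 2, 3]
assert list(it) == []            # exhausted after the first pass
```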
Possibilities:
1) Modify itertools.cycle. It looks like this:
def cycle(iterable):
    saved = []
    try:
        saved.append(iterable.next())
        yield saved[-1]
        isiter = True
    except:
        saved = iterable
        isiter = False
    # cycle('ABCD') --> A B C D A B C D A B C D ...
    for element in iterable:
        yield element
        if isiter:
            saved.append(element)
    # ??? What next?
It would be perfect if I could restart the generator -- I could send back a StopIteration and then, on the next gen.next(), return entry 0, i.e. `A B C D StopIteration A B C D StopIteration', but it doesn't look like that's actually possible.
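A minimal check (ordinary Python, not tied to the code above) confirms this: once a generator raises StopIteration, it stays exhausted, and there is no API to rewind it.

```python
def letters():
    yield 'A'
    yield 'B'

g = letters()
assert list(g) == ['A', 'B']   # first pass drains the generator
assert list(g) == []           # every later pass yields nothing
```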
A second option is that once StopIteration is hit, saved holds a cache. But there doesn't seem to be any way to reach the internal saved[] field from outside. Maybe a class version of this?
2) Or I could pass the list in directly:
def cycle(iterable, saved=[]):
    saved.clear()
    try:
        saved.append(iterable.next())
        yield saved[-1]
        isiter = True
    except:
        saved = iterable
        isiter = False
    # cycle('ABCD') --> A B C D A B C D A B C D ...
    for element in iterable:
        yield element
        if isiter:
            saved.append(element)

mysaved = []
myiter = cycle(someiter, mysaved)
But that looks nasty. In C/C++ I could pass in a reference and change the actual reference to saved to point at the iterable -- you can't actually do that in Python. So this doesn't work at all.
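To illustrate the reference point (a small sketch with hypothetical names): rebinding a parameter inside a function only changes the local name, so the caller's list is unaffected.

```python
def rebind(saved, iterable):
    saved = iterable     # rebinds the local name only, not the caller's variable
    return saved

mysaved = []
result = rebind(mysaved, [1, 2, 3])
assert result == [1, 2, 3]
assert mysaved == []     # the caller's list is untouched -- no C++-style reference
```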
Any other options?
Edit: More data. The CachingIterable approach appears too slow to be useful, but it did push me in a direction that might work. It's slightly slower than the naive approach (converting to a list myself), but doesn't seem to take a hit if the input is already iterable.
Some code and data:
def cube_generator(max=100):
    i = 0
    while i < max:
        yield i*i*i
        i += 1
# Base case: use generator each time
%%timeit
cg = cube_generator(); [x for x in cg]
cg = cube_generator(); [x for x in cg]
cg = cube_generator(); [x for x in cg]
10000 loops, best of 3: 55.4 us per loop
# Fastest case: flatten to list, then iterate
%%timeit
cg = cube_generator()
cl = list(cg)
[x for x in cl]
[x for x in cl]
[x for x in cl]
10000 loops, best of 3: 27.4 us per loop
%%timeit
cg = cube_generator()
ci2 = CachingIterable(cg)
[x for x in ci2]
[x for x in ci2]
[x for x in ci2]
1000 loops, best of 3: 239 us per loop
# Another attempt, which is closer to the above
# Not exactly the original solution using next, but close enough i guess
class CacheGen(object):
    def __init__(self, iterable):
        if isinstance(iterable, (list, tuple, dict)):
            self._myiter = iterable
        else:
            self._myiter = list(iterable)
    def __iter__(self):
        return self._myiter.__iter__()
    def __contains__(self, key):
        return self._myiter.__contains__(key)
    def __getitem__(self, key):
        return self._myiter.__getitem__(key)
%%timeit
cg = cube_generator()
ci = CacheGen(cg)
[x for x in ci]
[x for x in ci]
[x for x in ci]
10000 loops, best of 3: 30.5 us per loop
# But if you start with a list, it is faster
cg = cube_generator()
cl = list(cg)
%%timeit
[x for x in cl]
[x for x in cl]
[x for x in cl]
100000 loops, best of 3: 11.6 us per loop
%%timeit
ci = CacheGen(cl)
[x for x in ci]
[x for x in ci]
[x for x in ci]
100000 loops, best of 3: 13.5 us per loop
Is there a faster way that gets closer to the 'pure' loop?
Best Answer
What you want is not an iterator, but an iterable. An iterator can only iterate once over its contents. You want something that takes an iterator and over which you can then iterate multiple times, producing the same values from the iterator even if the iterator doesn't remember them, the way a generator doesn't. Then it's just a matter of special-casing those inputs that don't need caching. Here's a non-thread-safe example (EDIT: updated for efficiency):
import itertools

class AsYouGoCachingIterable(object):
    def __init__(self, iterable):
        self.iterable = iterable
        self.iter = iter(iterable)
        self.done = False
        self.vals = []

    def __iter__(self):
        if self.done:
            return iter(self.vals)
        # chain vals so far & then gen the rest
        return itertools.chain(self.vals, self._gen_iter())

    def _gen_iter(self):
        # gen new vals, appending as it goes
        for new_val in self.iter:
            self.vals.append(new_val)
            yield new_val
        self.done = True
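A quick usage check of the class above (the definition is repeated here so the snippet runs standalone): the first pass fills the cache from the underlying generator, and later passes read from the cache.

```python
import itertools

class AsYouGoCachingIterable(object):
    # same class as above, repeated for a self-contained example
    def __init__(self, iterable):
        self.iterable = iterable
        self.iter = iter(iterable)
        self.done = False
        self.vals = []

    def __iter__(self):
        if self.done:
            return iter(self.vals)
        # chain cached vals, then generate the rest
        return itertools.chain(self.vals, self._gen_iter())

    def _gen_iter(self):
        for new_val in self.iter:
            self.vals.append(new_val)
            yield new_val
        self.done = True

squares = AsYouGoCachingIterable(x * x for x in range(5))
assert list(squares) == [0, 1, 4, 9, 16]   # first pass fills the cache
assert list(squares) == [0, 1, 4, 9, 16]   # second pass is served from the cache
assert squares.done
```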
And some timings:
class ListCachingIterable(object):
    def __init__(self, obj):
        self.vals = list(obj)

    def __iter__(self):
        return iter(self.vals)

def cube_generator(max=1000):
    i = 0
    while i < max:
        yield i*i*i
        i += 1

def runit(iterable_factory):
    for i in xrange(5):
        for what in iterable_factory():
            pass

def puregen():
    runit(lambda: cube_generator())

def listtheniter():
    res = list(cube_generator())
    runit(lambda: res)

def listcachingiterable():
    res = ListCachingIterable(cube_generator())
    runit(lambda: res)

def asyougocachingiterable():
    res = AsYouGoCachingIterable(cube_generator())
    runit(lambda: res)
The results are:
In [59]: %timeit puregen()
1000 loops, best of 3: 774 us per loop
In [60]: %timeit listtheniter()
1000 loops, best of 3: 345 us per loop
In [61]: %timeit listcachingiterable()
1000 loops, best of 3: 348 us per loop
In [62]: %timeit asyougocachingiterable()
1000 loops, best of 3: 630 us per loop
So the simplest approach in terms of a class, ListCachingIterable, performs just as well as doing the list manually. The "as-you-go" variant is almost twice as slow, but it has the advantage if you don't consume the entire list, e.g. say you're only looking for the first cube over 100:
def first_cube_past_100(cubes):
    for cube in cubes:
        if cube > 100:
            return cube
    raise ValueError("No cube > 100 in this iterable")
Then:
In [76]: %timeit first_cube_past_100(cube_generator())
100000 loops, best of 3: 2.92 us per loop
In [77]: %timeit first_cube_past_100(ListCachingIterable(cube_generator()))
1000 loops, best of 3: 255 us per loop
In [78]: %timeit first_cube_past_100(AsYouGoCachingIterable(cube_generator()))
100000 loops, best of 3: 10.2 us per loop
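The resume behavior behind those numbers can be checked directly (definitions repeated from above so the snippet runs standalone): after the early exit only the consumed prefix is cached, and a later full pass picks up exactly where the generator left off.

```python
import itertools

class AsYouGoCachingIterable(object):
    # same class as in the answer, repeated for a self-contained example
    def __init__(self, iterable):
        self.iterable = iterable
        self.iter = iter(iterable)
        self.done = False
        self.vals = []

    def __iter__(self):
        if self.done:
            return iter(self.vals)
        return itertools.chain(self.vals, self._gen_iter())

    def _gen_iter(self):
        for new_val in self.iter:
            self.vals.append(new_val)
            yield new_val
        self.done = True

def cube_generator(max=1000):
    i = 0
    while i < max:
        yield i*i*i
        i += 1

def first_cube_past_100(cubes):
    for cube in cubes:
        if cube > 100:
            return cube

cubes = AsYouGoCachingIterable(cube_generator())
assert first_cube_past_100(cubes) == 125   # 5**3; only six values were pulled
assert len(cubes.vals) == 6                # cache holds just the consumed prefix
full = list(cubes)                         # a later full pass resumes the generator
assert full == [i*i*i for i in range(1000)]
assert list(cubes) == full                 # now served entirely from the cache
```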
On python - caching generators, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/19503455/