I'm trying to count the number of documents in a Firestore collection using Python. When I use db.collection('xxxx').stream(), I get the following error about halfway through:

503 The datastore operation timed out, or the data was temporarily unavailable.

It was working fine until then. Here is the code:
docs = db.collection(u'theDatabase').stream()
count = 0
for doc in docs:
    count += 1
print(count)
Every time, I hit the 503 error at around 73,000 records. Does anyone know how to get past the 20-second timeout?
Best answer
Although Juan's answer works for basic counting, if you need more data from Firebase than just the document IDs (a common use case being a full data migration that does not go through GCP), a recursive algorithm will eat up your memory.
So I took Juan's code and turned it into a standard iterative algorithm. Hope this helps someone.
limit = 1000  # Reduce this if it uses too much of your RAM

def stream_collection_loop(collection, count, cursor=None):
    while True:
        docs = []  # Very important. This frees the memory incurred in the recursion algorithm.
        if cursor:
            docs = [snapshot for snapshot in
                    collection.limit(limit).order_by('__name__').start_after(cursor).stream()]
        else:
            docs = [snapshot for snapshot in
                    collection.limit(limit).order_by('__name__').stream()]
        for doc in docs:
            print(doc.id)
            print(count)
            # The `doc` here is already a `DocumentSnapshot` so you can already call
            # `to_dict` on it to get the whole document.
            process_data_and_log_errors_if_any(doc)
            count = count + 1
        if len(docs) == limit:
            cursor = docs[limit - 1]
            continue
        break

stream_collection_loop(db_v3.collection('collection'), 0)
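The same cursor-based pagination can also be written as a generator, which lets the caller decide what to do with each snapshot (count, migrate, etc.) without the function printing or processing anything itself. This is a sketch rather than the answer's original code; it assumes `collection` is a `google-cloud-firestore` collection or query object exposing `limit`, `order_by`, `start_after`, and `stream`:

```python
def stream_collection(collection, batch_size=1000):
    """Yield every document snapshot in `collection`, one page at a time.

    Each round trip fetches at most `batch_size` snapshots, so no single
    query runs long enough to hit the server-side timeout, and only one
    page of snapshots is held in memory at a time.
    """
    cursor = None
    while True:
        query = collection.limit(batch_size).order_by('__name__')
        if cursor is not None:
            query = query.start_after(cursor)
        batch = list(query.stream())
        yield from batch              # hand snapshots to the caller lazily
        if len(batch) < batch_size:
            return                    # short (or empty) page: collection exhausted
        cursor = batch[-1]            # resume after the last snapshot seen
```

With this, the original counting task becomes `sum(1 for _ in stream_collection(db.collection('theDatabase')))`. Note also that recent versions of the `google-cloud-firestore` client support server-side count aggregation queries, which avoid streaming the documents entirely when only the count is needed.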
On "python - How to download a large collection in Firestore with Python without a 503 error?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/56011623/