I want to read a huge Azure blob storage file and stream its contents to an Event Hub. I found this example,
from azure.storage.blob import BlockBlobService
bb = BlockBlobService(account_name='', account_key='')
container_name = ""
blob_name_to_download = "test.txt"
file_path ="/home/Adam/Downloaded_test.txt"
bb.get_blob_to_path(container_name, blob_name_to_download, file_path, open_mode='wb',
snapshot=None, start_range=None, end_range=None, validate_content=False,
progress_callback=None, max_connections=2, lease_id=None,
if_modified_since=None, if_unmodified_since=None,
if_match=None, if_none_match=None, timeout=None)
but this way you can't fetch the blocks in a loop, which is what I want to do. So how do I modify this code for my case?
Best Answer
If you notice, the get_blob_to_path method has two parameters - start_range and end_range. These two parameters allow you to read a blob's data in chunks.

What you need to do is first get the blob's properties to find out its length, and then repeatedly call a get_blob_xxx method to fetch the data in chunks. I used the get_blob_to_text method, but you can see the other methods here.

Here is the code I came up with. HTH.
from azure.storage.blob import BlockBlobService

bb = BlockBlobService(account_name='', account_key='')
container_name = ""
blob_name_to_download = "test.txt"

# First get the blob's properties; we want to find out the blob's content length
blob = bb.get_blob_properties(container_name, blob_name_to_download)

# Extract the content length from the blob's properties
blob_size = blob.properties.content_length

# Now fetch 1 MB at a time, looping until the whole blob has been read
start = 0
chunk_size = 1 * 1024 * 1024  # 1 MB
while start < blob_size:
    # end_range is inclusive, and must not run past the last byte of the blob
    end_range = min(start + chunk_size, blob_size) - 1
    blob_chunk = bb.get_blob_to_text(container_name, blob_name_to_download,
                                     encoding='utf-8',
                                     start_range=start, end_range=end_range)
    # blob_chunk.content holds up to 1 MB of data. Do whatever you like with it.
    start = end_range + 1
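The chunking arithmetic above (inclusive end_range, clamped to the blob's last byte) is easy to get wrong, so here is a minimal, Azure-free sketch of just that part; the helper name chunk_ranges is my own, not part of the SDK:

```python
def chunk_ranges(blob_size, chunk_size):
    """Yield inclusive (start_range, end_range) byte pairs covering a blob."""
    start = 0
    while start < blob_size:
        # Inclusive end; the final chunk is clamped to the blob's last byte.
        end = min(start + chunk_size, blob_size) - 1
        yield start, end
        start = end + 1

# Example: a 10-byte blob read in 4-byte chunks
print(list(chunk_ranges(10, 4)))  # [(0, 3), (4, 7), (8, 9)]
```

You can then pass each pair straight into start_range/end_range of whichever get_blob_xxx method you choose.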
On the topic of "python - How to read a large Azure blob storage file chunk by chunk", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/67902349/