I use this script to query data from CSV files stored in an AWS S3 bucket. It works for CSV files that were originally saved in comma-separated format, but I have a lot of data saved with a tab delimiter (Sep='\t'), which makes the code fail.
The original data is very large and rewriting it would be painful. Is there any way to query the data while specifying the delimiter/separator for the CSV file?
I used the approach from this article: https://towardsdatascience.com/how-i-improved-performance-retrieving-big-data-with-s3-select-2bd2850bc428 ... and I'd like to thank the author for the tutorial; it saved me a lot of time.
Here is the code:
import boto3
import os
import pandas as pd

S3_KEY = r'source/df.csv'
S3_BUCKET = 'my_bucket'
TARGET_FILE = 'dataset.csv'
aws_access_key_id = 'my_key'
aws_secret_access_key = 'my_secret'

s3_client = boto3.client(service_name='s3',
                         region_name='us-east-1',
                         aws_access_key_id=aws_access_key_id,
                         aws_secret_access_key=aws_secret_access_key)

query = """SELECT column1
           FROM S3Object
           WHERE column1 = '4223740573'"""

result = s3_client.select_object_content(Bucket=S3_BUCKET,
                                         Key=S3_KEY,
                                         ExpressionType='SQL',
                                         Expression=query,
                                         InputSerialization={'CSV': {'FileHeaderInfo': 'Use'}},
                                         OutputSerialization={'CSV': {}})

# remove the file if it exists, since we append filtered rows line by line
if os.path.exists(TARGET_FILE):
    os.remove(TARGET_FILE)

with open(TARGET_FILE, 'a+') as filtered_file:
    # write the header as the first line, then append each row from S3 Select
    filtered_file.write('Column1\n')
    for record in result['Payload']:
        if 'Records' in record:
            res = record['Records']['Payload'].decode('utf-8')
            filtered_file.write(res)

result = pd.read_csv(TARGET_FILE)
Best answer
Since '\t' here separates the fields within a record (the rows are still delimited by newlines), the option you need is FieldDelimiter, which the InputSerialization options also allow you to specify:
FieldDelimiter - A single character used to separate individual fields in a record. You can specify an arbitrary delimiter.
So you could try:
result = s3_client.select_object_content(
    Bucket=S3_BUCKET,
    Key=S3_KEY,
    ExpressionType='SQL',
    Expression=query,
    InputSerialization={'CSV': {'FileHeaderInfo': 'Use',
                                'FieldDelimiter': '\t'}},
    OutputSerialization={'CSV': {}})
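For completeness: the CSV options under OutputSerialization accept a FieldDelimiter as well, so the filtered rows can be written back out as tab-separated text and read with a matching separator in pandas. A minimal sketch, reusing the client, bucket, key, query, and TARGET_FILE defined above:

# sketch: parse tab-separated input and also emit tab-separated output
result = s3_client.select_object_content(
    Bucket=S3_BUCKET,
    Key=S3_KEY,
    ExpressionType='SQL',
    Expression=query,
    # tell S3 Select that the input columns are tab-separated
    InputSerialization={'CSV': {'FileHeaderInfo': 'Use',
                                'FieldDelimiter': '\t'}},
    # emit tab-separated rows so the local file stays consistent
    OutputSerialization={'CSV': {'FieldDelimiter': '\t'}})

with open(TARGET_FILE, 'w') as filtered_file:
    filtered_file.write('Column1\n')
    for record in result['Payload']:
        if 'Records' in record:
            filtered_file.write(record['Records']['Payload'].decode('utf-8'))

# read the result back with the matching tab separator
df = pd.read_csv(TARGET_FILE, sep='\t')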
Regarding "python - How to use S3 Select with tab separated csv files", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/66772820/