amazon-web-services - AWS Glue job - Converting CSV to Parquet

Tags: amazon-web-services apache-spark parquet aws-glue

I am trying to convert a roughly 1.5 GB GZIPPED CSV to Parquet using AWS Glue. The script below is the auto-generated Glue job for the task. It seems to take an extremely long time (with 10 DPUs I have waited a long while and never seen it end or produce any output data).

I am wondering whether anyone has experience converting 1.5 GB+ GZIPPED CSVs to Parquet - is there a better way to do this kind of conversion?

I have terabytes of data to convert. What worries me is that converting mere gigabytes already seems to take a very long time.

My Glue job log contains thousands of entries such as:

18/03/02 20:20:20 DEBUG Client: 
client token: N/A
diagnostics: N/A
ApplicationMaster host: 172.31.58.225
ApplicationMaster RPC port: 0
queue: default
start time: 1520020335454
final status: UNDEFINED
tracking URL: http://ip-172-31-51-199.ec2.internal:20888/proxy/application_1520020149832_0001/
user: root

The auto-generated AWS Glue job code:
import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job

## @params: [JOB_NAME]
args = getResolvedOptions(sys.argv, ['JOB_NAME'])

sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)
## @type: DataSource
## @args: [database = "test_datalake_db", table_name = "events2_2017_test", transformation_ctx = "datasource0"]
## @return: datasource0
## @inputs: []
datasource0 = glueContext.create_dynamic_frame.from_catalog(database = "test_datalake_db", table_name = "events2_2017_test", transformation_ctx = "datasource0")
## @type: ApplyMapping
## @args: [mapping = [("sys_vortex_id", "string", "sys_vortex_id", "string"), ("sys_app_id", "string", "sys_app_id", "string"), ("sys_pq_id", "string", "sys_pq_id", "string"), ("sys_ip_address", "string", "sys_ip_address", "string"), ("sys_submitted_at", "string", "sys_submitted_at", "string"), ("sys_received_at", "string", "sys_received_at", "string"), ("device_id_type", "string", "device_id_type", "string"), ("device_id", "string", "device_id", "string"), ("timezone", "string", "timezone", "string"), ("online", "string", "online", "string"), ("app_version", "string", "app_version", "string"), ("device_days", "string", "device_days", "string"), ("device_sessions", "string", "device_sessions", "string"), ("event_id", "string", "event_id", "string"), ("event_at", "string", "event_at", "string"), ("event_date", "string", "event_date", "string"), ("int1", "string", "int1", "string")], transformation_ctx = "applymapping1"]
## @return: applymapping1
## @inputs: [frame = datasource0]
applymapping1 = ApplyMapping.apply(frame = datasource0, mappings = [("sys_vortex_id", "string", "sys_vortex_id", "string"), ("sys_app_id", "string", "sys_app_id", "string"), ("sys_pq_id", "string", "sys_pq_id", "string"), ("sys_ip_address", "string", "sys_ip_address", "string"), ("sys_submitted_at", "string", "sys_submitted_at", "string"), ("sys_received_at", "string", "sys_received_at", "string"), ("device_id_type", "string", "device_id_type", "string"), ("device_id", "string", "device_id", "string"), ("timezone", "string", "timezone", "string"), ("online", "string", "online", "string"), ("app_version", "string", "app_version", "string"), ("device_days", "string", "device_days", "string"), ("device_sessions", "string", "device_sessions", "string"), ("event_id", "string", "event_id", "string"), ("event_at", "string", "event_at", "string"), ("event_date", "string", "event_date", "string"), ("int1", "string", "int1", "string")], transformation_ctx = "applymapping1")
## @type: ResolveChoice
## @args: [choice = "make_struct", transformation_ctx = "resolvechoice2"]
## @return: resolvechoice2
## @inputs: [frame = applymapping1]
resolvechoice2 = ResolveChoice.apply(frame = applymapping1, choice = "make_struct", transformation_ctx = "resolvechoice2")
## @type: DropNullFields
## @args: [transformation_ctx = "dropnullfields3"]
## @return: dropnullfields3
## @inputs: [frame = resolvechoice2]
dropnullfields3 = DropNullFields.apply(frame = resolvechoice2, transformation_ctx = "dropnullfields3")
## @type: DataSink
## @args: [connection_type = "s3", connection_options = {"path": "s3://devops-redshift*****/prd/parquet"}, format = "parquet", transformation_ctx = "datasink4"]
## @return: datasink4
## @inputs: [frame = dropnullfields3]
datasink4 = glueContext.write_dynamic_frame.from_options(frame = dropnullfields3, connection_type = "s3", connection_options = {"path": "s3://devops-redshift*****/prd/parquet"}, format = "parquet", transformation_ctx = "datasink4")
job.commit()

Best answer

Yes, I recently found that Spark DataFrames - as opposed to Glue's DynamicFrames - are a significantly faster way to do this.

# boilerplate, generated code (imports and args added so the snippet runs standalone)
import sys
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job

args = getResolvedOptions(sys.argv, ['JOB_NAME'])

sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)

# some job-specific variables
compression_type = 'snappy' # 'snappy', 'gzip', or 'none'
source_path = 's3://source-bucket/part1=x/part2=y/'
destination_path = 's3://destination-bucket/part1=x/part2=y/'

# CSV to Parquet conversion
df = spark.read.option('delimiter','|').option('header','true').csv(source_path)
df.write.mode("overwrite").format('parquet').option('compression', compression_type).save(destination_path)
job.commit()
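One practical reason either approach can stall on GZIPPED input is that gzip is not a splittable format: a single 1.5 GB .csv.gz file is decompressed and read by a single executor. A common mitigation is to repartition after the read so that the Parquet write, at least, runs in parallel and produces reasonably sized part files. The sizing heuristic below is my own rough rule of thumb (the 128 MB target and the 4x decompression ratio are assumptions, not anything from the Glue documentation):

```python
import math

def target_partitions(total_bytes, target_file_mb=128):
    """Rough heuristic: aim for part files of about target_file_mb,
    a commonly cited sweet spot for Parquet on S3. Returns >= 1."""
    target_bytes = target_file_mb * 1024 * 1024
    return max(1, math.ceil(total_bytes / target_bytes))

# Assumption: a 1.5 GB gzipped CSV expands roughly 4x when
# decompressed; measure your own data before relying on this.
uncompressed_estimate = int(1.5 * 1024**3 * 4)
num_parts = target_partitions(uncompressed_estimate)
```

With an estimate like this you could insert `df = df.repartition(num_parts)` between the read and the write above; `repartition` is a standard Spark DataFrame method and forces a shuffle, which is usually a worthwhile trade against writing one enormous Parquet file from a single task.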

Regarding amazon-web-services - AWS Glue job - Converting CSV to Parquet, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/49076794/
