python - Databricks Autoloader - Column Transformation - Column is not iterable

Tags: python databricks azure-databricks databricks-autoloader

I am using Azure Databricks Autoloader to process files from ADLS Gen 2 into Delta Lake. I wrote my foreachBatch function (PySpark) as follows:

# Rename incoming dataframe columns
schemadf = transformschema.renameColumns(microBatchDF, fileconfig)

# Apply simple transformation on schemadf using createOrReplaceTempView
modifieddf = applytransform(schemadf, targettable, targetdatabase, fileconfig)

# Add audit cols to modifieddf
transformdf = auditlineage.addauditcols(modifieddf, fileconfig, appid)
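For reference, a minimal sketch of how such a foreachBatch function is typically wired to an Autoloader stream. The paths, input format, and target table name here are hypothetical placeholders (not the asker's configuration), and spark is the ambient Databricks session:

from pyspark.sql import DataFrame

def process_batch(microBatchDF: DataFrame, batchId: int):
    # The three steps shown above (rename, transform, audit cols) would run
    # here; this stub just appends the batch to a Delta table as a placeholder
    microBatchDF.write.format("delta").mode("append").saveAsTable("targetdb.md_customer")

(spark.readStream
    .format("cloudFiles")                                             # Autoloader source
    .option("cloudFiles.format", "csv")                               # assumed input format
    .option("cloudFiles.schemaLocation", "/mnt/checkpoints/_schema")  # hypothetical path
    .load("/mnt/raw/customer/")                                       # hypothetical ADLS Gen 2 mount
    .writeStream
    .foreachBatch(process_batch)
    .option("checkpointLocation", "/mnt/checkpoints/md_customer")     # hypothetical path
    .start())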

renameColumns code:

def renameColumns(dataframe, schema):
  # 'Schema' is a comma-separated string of target column names
  names = schema['Schema']
  splitstr = list(names.split(','))
  # Rename positionally: the i-th incoming column gets the i-th configured name
  for c, n in zip(dataframe.columns, splitstr):
      dataframe = dataframe.withColumnRenamed(c, n)
  return dataframe
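As an aside, since the rename is purely positional, the loop can be collapsed into a single toDF call, a standard PySpark DataFrame method. A minimal sketch, assuming the configured name list matches the column count exactly (toDF errors on a mismatch, whereas the zip above silently truncates):

def renameColumns(dataframe, schema):
    names = schema['Schema'].split(',')
    # toDF replaces all column names positionally in one pass
    return dataframe.toDF(*names)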

applytransform code:

def applytransform(inputdf,targettable,targetdatabase,fileconfig):  
  logger.info('Inside applytransform for Database/Table {}.{}',targetdatabase,targettable)
  inputdf.createOrReplaceTempView("src_to_transform")
  lspark = inputdf._jdf.sparkSession()
  if 'TransformQuery' in fileconfig and fileconfig['TransformQuery'] is not None:
    vsqlscript = fileconfig['TransformQuery']
    df = lspark.sql(vsqlscript)    
    logger.info("Applied Tranform")    
    return df
  else:
    logger.info("Passed DF")
    return inputdf

addauditcols code:

import datetime

from pyspark.sql.functions import input_file_name, lit

def addauditcols(inputdf, fileconfig, app_id):
    now = datetime.datetime.now()
    print(type(inputdf))
    createdby = 'DatabricksJob-' + app_id
    datasource = fileconfig['Datasource']
    recordactiveind = 'Y'
    # Append standard audit/lineage columns to the incoming batch
    df = inputdf.withColumn('datasource', lit(datasource)).\
        withColumn('createdtimestamp', lit(now)).\
        withColumn('lastmodifiedtimestamp', lit(now)).\
        withColumn('createduserid', lit(createdby)).\
        withColumn('lastmodifieduserid', lit(createdby)).\
        withColumn('filepath', input_file_name()).\
        withColumn('recordactiveind', lit(recordactiveind))
    return df

The applytransform function returns a py4j.java_gateway.JavaObject instead of a regular pyspark.sql.dataframe.DataFrame, so I cannot perform a simple withColumn()-style transformation on modifieddf inside addauditcols.
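To make the type mismatch concrete, a minimal illustration (spark here stands for the ordinary PySpark session):

lspark = schemadf._jdf.sparkSession()   # py4j handle to the JVM SparkSession
jdf = lspark.sql("SELECT * FROM src_to_transform")
print(type(jdf))    # <class 'py4j.java_gateway.JavaObject'>

pdf = spark.sql("SELECT * FROM src_to_transform")
print(type(pdf))    # <class 'pyspark.sql.dataframe.DataFrame'>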

The error I get is as follows:

2021-12-05 21:09:57.274 | INFO     | __main__:main:73 - modifieddf Type::: 
<class 'py4j.java_gateway.JavaObject'>
2021-12-05 21:09:57.421 | ERROR    | __main__:main:91 - Operating Failed for md_customer, with Exception Column is not iterable
Traceback (most recent call last):

  File "c:/Users/asdsad/integration-app\load2cleansed.py", line 99, in <module>
    main()
    └ <function main at 0x000001C570C263A0>

> File "c:/Users/asdsad/integration-app\load2cleansed.py", line 76, in main
    transformdf = auditlineage.addauditcols(modifieddf,fileconfig,appid)
                  │            │            │          │          └ 'local-1638760184357'
                  │            │            │          └ {'Schema': 'customernumber,customername,addrln1,city,statename,statecode,postalcode,countrycode,activeflag,sourcelastmodified...
                  │            │            └ JavaObject id=o48
                  │            └ <function addauditcols at 0x000001C570B55CA0>
                  └ <module 'core.wrapper.auditlineage' from 'c:\\Users\\asdsad\integration-app\\core\\wrapper\\a...

  File "c:\Users\1232\Documents\Code\ntegration-app\core\wrapper\auditlineage.py", line 30, in addauditcols
    df = inputdf.withColumn('datasource',lit(datasource)).\
         │                               │   └ 'DUMMY-CUST'
         │                               └ <function lit at 0x000001C570B79F70>
         └ JavaObject id=o48

  File "C:\Users\testapp\lib\site-packages\py4j\java_gateway.py", line 1296, in __call__
    args_command, temp_args = self._build_args(*args)
                              │    │            └ ('datasource', Column<'DUMMY-CUST'>)
                              │    └ <function JavaMember._build_args at 0x000001C5704B9280>
                              └ <py4j.java_gateway.JavaMember object at 0x000001C570C5B910>

  File "C:\Users\testapp\lib\site-packages\py4j\java_gateway.py", line 1260, in _build_args
    (new_args, temp_args) = self._get_args(args)
                            │    │         └ ('datasource', Column<'DUMMY-CUST'>)
                            │    └ <function JavaMember._get_args at 0x000001C5704B91F0>
                            └ <py4j.java_gateway.JavaMember object at 0x000001C570C5B910>

  File "C:\Users\testapp\lib\site-packages\py4j\java_gateway.py", line 1247, in _get_args
    temp_arg = converter.convert(arg, self.gateway_client)
               │         │       │    │    └ <py4j.java_gateway.GatewayClient object at 0x000001C5705C89A0>
               │         │       │    └ <py4j.java_gateway.JavaMember object at 0x000001C570C5B910>
               │         │       └ Column<'DUMMY-CUST'>
               │         └ <function ListConverter.convert at 0x000001C5704CE5E0>
               └ <py4j.java_collections.ListConverter object at 0x000001C5704C3FD0>

  File "C:\Users\testapp\lib\site-packages\py4j\java_collections.py", line 510, in convert
    for element in object:
                   └ Column<'DUMMY-CUST'>

  File "C:\Users\testapp\lib\site-packages\pyspark\sql\column.py", line 470, in __iter__
    raise TypeError("Column is not iterable")

TypeError: Column is not iterable

Any help is appreciated.

Best Answer

Please remove lspark = inputdf._jdf.sparkSession()

That pattern is used to send SQL upsert commands (such as MERGE) to Delta without returning a DataFrame.
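For context, that is roughly the Delta Lake foreachBatch upsert pattern, where the Java session is used only to execute a statement for its side effect. A sketch; the target table and join condition below are hypothetical:

def upsert_batch(microBatchDF, batchId):
    microBatchDF.createOrReplaceTempView("updates")
    # Executed for its side effect; the returned JavaObject is discarded
    microBatchDF._jdf.sparkSession().sql("""
        MERGE INTO md_customer t
        USING updates s
        ON t.customernumber = s.customernumber
        WHEN MATCHED THEN UPDATE SET *
        WHEN NOT MATCHED THEN INSERT *
    """)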

Just use spark.sql(vsqlscript) instead.

If that doesn't help, please also share your vsqlscript code.
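Putting that together, a minimal sketch of a corrected applytransform. It assumes Spark 3.x, where SparkSession.getActiveSession() is available as a way to reach the PySpark session when no global spark variable is in scope, and that the asker's logger is loguru (the traceback format suggests so):

from loguru import logger
from pyspark.sql import SparkSession

def applytransform(inputdf, targettable, targetdatabase, fileconfig):
  logger.info('Inside applytransform for Database/Table {}.{}', targetdatabase, targettable)
  inputdf.createOrReplaceTempView("src_to_transform")
  if 'TransformQuery' in fileconfig and fileconfig['TransformQuery'] is not None:
    vsqlscript = fileconfig['TransformQuery']
    # Use the PySpark session, not inputdf._jdf.sparkSession(), so that
    # sql() returns a pyspark.sql.DataFrame instead of a py4j JavaObject
    spark = SparkSession.getActiveSession()
    df = spark.sql(vsqlscript)
    logger.info("Applied Transform")
    return df
  else:
    logger.info("Passed DF")
    return inputdf

withColumn() then works on the returned DataFrame, and addauditcols receives a real pyspark.sql.dataframe.DataFrame.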

Regarding python - Databricks Autoloader - Column Transformation - Column is not iterable, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/70240583/

Related articles:

azure - How to get output parameters from an Execute Pipeline activity in ADF?

python - Why doesn't urllib work with local websites?

Python: How to call a method in a separate process

Azure Databricks SCIM provisioning with Private Link configured

python - Running a Bokeh server on Azure Databricks?

azure-hdinsight - Advantages of using HDInsight Spark over an Azure Databricks cluster

python - Average trend curve for data points in Python

python - Adding a combobox in Glade with Python and GObject

python - How to import local modules into an Azure Databricks notebook?

apache-spark - Sending an email with attachments from a Databricks Notebook