I'm new to Python and have to write it for a project at work. I'd be grateful if someone could help me correct the code below.
I have an Azure Storage account with several containers. I'm using an HttpTrigger function so that Azure Data Factory can invoke an Azure Function app. When a pipeline on ADF creates a CSV file, the pipeline should be able to run the function app with the file name as a parameter and convert the CSV file into an Excel file on the Azure Storage account.
The code runs successfully locally, but when I deploy it to the function app I get an error. I've been trying to get it running for weeks without success.
--- __init__.py file
import os, uuid
from azure.storage.blob import BlobServiceClient, BlobClient, ContainerClient
import pandas as pd
from io import StringIO, BytesIO
import azure.functions as func
import xlsxwriter
import logging

def main(req: func.HttpRequest) -> func.HttpResponse:
    logging.info('Python HTTP trigger function processed a request.')
    name = "1893_Item_20220206_err.csv"  # req.params.get('name')
    if name:
        connect_str = os.getenv('AzureWebJobsStorage')
        blob_service_client = BlobServiceClient.from_connection_string(connect_str)
        container_name = "test"
        blob_client = blob_service_client.get_blob_client(container=container_name, blob=name)
        excel_blob_client = blob_service_client.get_blob_client(container=container_name, blob="1893_Item_20220206_err.xlsx")
        blob = blob_client.download_blob()
        read_file = pd.read_csv(StringIO(blob.content_as_text()), sep="|")
        print(read_file)
        output = BytesIO()
        writer = pd.ExcelWriter(output, engine='xlsxwriter')
        read_file.to_excel(writer)
        writer.save()
        output.seek(0)
        workbook = output.read()
        excel_blob_client.upload_blob(workbook, overwrite=True)
    else:
        return func.HttpResponse(
            "This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response.",
            status_code=200
        )
--- function.json file
{
  "scriptFile": "__init__.py",
  "bindings": [
    {
      "authLevel": "anonymous",
      "type": "httpTrigger",
      "path": "test/{blobname}.csv",
      "direction": "in",
      "name": "req",
      "methods": [
        "get",
        "post"
      ]
    },
    {
      "type": "http",
      "direction": "out",
      "name": "$return"
    }
  ]
}
Best answer
Looking at your code, my best guess is that the value of the name parameter you hardcoded includes the .csv extension, while the name parameter in the path does not. You have it as a static string in the path definition: "path": "test/{blobname}.csv".
Either append .csv to the name parameter, or remove it from the path specification.
Next, the parameter in the path is named blobname, not name.
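The points above boil down to name handling: the route template "test/{blobname}.csv" keeps the extension outside the parameter, so the handler has to re-append it. A minimal sketch of that logic, pulled into a plain helper for clarity (inside the function the bare value would come from req.route_params.get('blobname'); the helper name resolve_blob_names is my own, not part of the original code):

```python
def resolve_blob_names(blobname: str) -> tuple:
    """Given the bare route parameter (extension stripped by the route
    template "test/{blobname}.csv"), return the source CSV blob name
    and the target XLSX blob name."""
    # The route template already consumed the ".csv" suffix, so it has
    # to be re-added before looking up the blob in the container.
    csv_name = blobname + ".csv"
    # Derive the output name instead of hardcoding it, so the function
    # works for any file name the pipeline passes in.
    xlsx_name = blobname + ".xlsx"
    return csv_name, xlsx_name

print(resolve_blob_names("1893_Item_20220206_err"))
```

In the HTTP handler this would replace both the hardcoded name and the hardcoded "1893_Item_20220206_err.xlsx" target.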
Alternative solution
This scenario can also be accomplished with the Azure Blob storage trigger for Azure Functions. That way, the added blob is available to the function as an InputStream.
The Blob storage trigger starts a function when a new or updated blob is detected. The blob contents are provided as input to the function.
Here is an example (taken from the article linked to above):
function.json:
{
  "scriptFile": "__init__.py",
  "disabled": false,
  "bindings": [
    {
      "name": "myblob",
      "type": "blobTrigger",
      "direction": "in",
      "path": "samples-workitems/{name}",
      "connection": "MyStorageAccountAppSetting"
    }
  ]
}
and
import logging
import azure.functions as func

def main(myblob: func.InputStream):
    logging.info('Python Blob trigger function processed %s', myblob.name)
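The conversion itself would then run on the bytes the trigger hands over (myblob.read() in the handler), with the result uploaded via a BlobClient as in the original code. A sketch of just the CSV-to-Excel step as a standalone function, assuming the same pipe-delimited input; note that ExcelWriter is used as a context manager here, since writer.save() has been removed in newer pandas releases, and the engine is left to pandas' default rather than pinned to xlsxwriter as in the question:

```python
from io import BytesIO, StringIO
import pandas as pd

def csv_bytes_to_xlsx(csv_bytes: bytes, sep: str = "|") -> bytes:
    """Convert pipe-delimited CSV content to the bytes of an XLSX workbook."""
    df = pd.read_csv(StringIO(csv_bytes.decode("utf-8")), sep=sep)
    output = BytesIO()
    # The context manager closes the workbook on exit, replacing the
    # deprecated/removed writer.save() call.
    with pd.ExcelWriter(output) as writer:
        df.to_excel(writer, index=False)
    return output.getvalue()
```

In the blob-trigger handler this becomes excel_blob_client.upload_blob(csv_bytes_to_xlsx(myblob.read()), overwrite=True).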
Regarding Python code on an Azure Function app to convert a CSV file into Excel on an Azure Storage account, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/71202721/