python - Google Dataflow: global name is not defined - apache beam

Tags: python google-cloud-dataflow apache-beam shapely

Locally I have this:

from shapely.geometry import Point
<...>
class GeoDataIngestion:
    def parse_method(self, string_input):
       Place = Point(float(values[2]), float(values[3]))
       <...>

I run it with Python 2.7 and everything works fine.

After that, I tried testing it with the Dataflow runner, but at runtime I get this error:

NameError: global name 'Point' is not defined

The pipeline:

geo_data = (raw_data
            | 'Geo data transform' >> beam.Map(lambda s: geo_ingestion.parse_method(s)))

I have read other posts and thought this should work, but I am not sure whether Google Dataflow does anything special in this regard.

I also tried:

import shapely.geometry
<...>
Place = shapely.geometry.Point(float(values[2]), float(values[3]))

Same result:

NameError: global name 'shapely' is not defined

Any ideas?


In Google Cloud, if I try it in my virtual environment, I can do this without any problem:

(env) ...@cloudshell:~ ()$ python
Python 2.7.13 (default, Sep 26 2018, 18:42:22)
[GCC 6.3.0 20170516] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from shapely.geometry import Point
>>> Var = Point(-5.020751953125, 39.92237576385941)

Additional information:


Error when using requirements.txt:

Collecting Shapely==1.6.4.post1 (from -r req.txt (line 2))
  Using cached https://files.pythonhosted.org/packages/7d/3c/0f09841db07aabf9cc387662be646f181d07ed196e6f60ce8be5f4a8f0bd/Shapely-1.6.4.post1.tar.gz
  Saved c:\<...>\shapely-1.6.4.post1.tar.gz
    Complete output from command python setup.py egg_info:
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "c:\<...>\temp\pip-download-kpg5ca\Shapely\setup.py", line 80, in <module>
        from shapely._buildcfg import geos_version_string, geos_version, \
      File "shapely\_buildcfg.py", line 200, in <module>
        lgeos = CDLL("geos_c.dll")
      File "C:\Python27\Lib\ctypes\__init__.py", line 366, in __init__
        self._handle = _dlopen(self._name, mode)
    WindowsError: [Error 126] No se puede encontrar el módulo especificado

Error when using setup.py:

The setup.py is similar to this one, with these changes:

CUSTOM_COMMANDS = [
    ['apt-get', 'update'],
    ['apt-get', '--assume-yes', 'install', 'libgeos-dev'],
    ['pip', 'install', 'Shapely'],
    ['echo', 'Custom command worked!']
]

The result is as if the package had not been installed, because I get the error right from the start:

NameError: global name 'Point' is not defined

The setup.py file:

from __future__ import absolute_import
from __future__ import print_function
import subprocess
from distutils.command.build import build as _build
import setuptools


class build(_build):  # pylint: disable=invalid-name
  sub_commands = _build.sub_commands + [('CustomCommands', None)]
CUSTOM_COMMANDS = [
    ['apt-get', 'update'],
    ['apt-get', '--assume-yes', 'install', 'libgeos-dev'],
    ['pip', 'install', 'Shapely']]


class CustomCommands(setuptools.Command):
  """Runs the CUSTOM_COMMANDS above (apt-get and pip) during the build."""
  def initialize_options(self):
    pass

  def finalize_options(self):
    pass

  def RunCustomCommand(self, command_list):
    print('Running command: %s' % command_list)
    p = subprocess.Popen(
        command_list,
        stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    # Can use communicate(input='y\n'.encode()) if the command run requires
    # some confirmation.
    stdout_data, _ = p.communicate()
    print('Command output: %s' % stdout_data)
    if p.returncode != 0:
      raise RuntimeError(
          'Command %s failed: exit code: %s' % (command_list, p.returncode))

  def run(self):
    for command in CUSTOM_COMMANDS:
      self.RunCustomCommand(command)

REQUIRED_PACKAGES = ['Shapely']

setuptools.setup(
    name='dataflow',
    version='0.0.1',
    description='Dataflow set workflow package.',
    install_requires=REQUIRED_PACKAGES,
    packages=setuptools.find_packages(),
    cmdclass={
        'build': build,
        'CustomCommands': CustomCommands,
        }
    )

Pipeline options:

pipeline_options = PipelineOptions()
pipeline_options.view_as(StandardOptions).streaming = True
pipeline_options.view_as(SetupOptions).save_main_session = True
pipeline_options.view_as(SetupOptions).setup_file = 'C:\<...>\setup.py'

with beam.Pipeline(options=pipeline_options) as p:

The call:

python -m dataflow --project XXX --temp_location gs://YYY --runner DataflowRunner --region europe-west1 --setup_file C:\<...>\setup.py

Startup log (before Dataflow starts waiting for data):

INFO:root:Defaulting to the temp_location as staging_location: gs://iotbucketdetector/test/prueba
C:\Users\<...>~1\Desktop\PROYEC~2\env\lib\site-packages\apache_beam\runners\dataflow\dataflow_runner.py:816: DeprecationWarning: options is deprecated since First stable release.. References to <pipeline>.options will
 not be supported
  transform_node.inputs[0].pipeline.options.view_as(StandardOptions))
INFO:root:Starting GCS upload to gs://<...>-1120074505-586000.1542699905.588000/pipeline.pb...
INFO:oauth2client.transport:Attempting refresh to obtain initial access_token
INFO:oauth2client.client:Refreshing access_token
INFO:root:Completed GCS upload to gs://<...>-1120074505-586000.1542699905.588000/pipeline.pb
INFO:root:Executing command: ['C:\\Users\\<...>~1\\Desktop\\PROYEC~2\\env\\Scripts\\python.exe', 'setup.py', 'sdist', '--dist-dir', 'c:\\users\\<...>~1\\appdata\\local\\temp\\tmpakq8bs']
running sdist
running egg_info
writing requirements to dataflow.egg-info\requires.txt
writing dataflow.egg-info\PKG-INFO
writing top-level names to dataflow.egg-info\top_level.txt
writing dependency_links to dataflow.egg-info\dependency_links.txt
reading manifest file 'dataflow.egg-info\SOURCES.txt'
writing manifest file 'dataflow.egg-info\SOURCES.txt'
warning: sdist: standard file not found: should have one of README, README.rst, README.txt, README.md

running check
warning: check: missing required meta-data: url

warning: check: missing meta-data: either (author and author_email) or (maintainer and maintainer_email) must be supplied

creating dataflow-0.0.1
creating dataflow-0.0.1\dataflow.egg-info
copying files to dataflow-0.0.1...
copying setup.py -> dataflow-0.0.1
copying dataflow.egg-info\PKG-INFO -> dataflow-0.0.1\dataflow.egg-info
copying dataflow.egg-info\SOURCES.txt -> dataflow-0.0.1\dataflow.egg-info
copying dataflow.egg-info\dependency_links.txt -> dataflow-0.0.1\dataflow.egg-info
copying dataflow.egg-info\requires.txt -> dataflow-0.0.1\dataflow.egg-info
copying dataflow.egg-info\top_level.txt -> dataflow-0.0.1\dataflow.egg-info
Writing dataflow-0.0.1\setup.cfg
Creating tar archive
removing 'dataflow-0.0.1' (and everything under it)
INFO:root:Starting GCS upload to gs://<...>-1120074505-586000.1542699905.588000/workflow.tar.gz...
INFO:root:Completed GCS upload to gs://<...>-1120074505-586000.1542699905.588000/workflow.tar.gz
INFO:root:Starting GCS upload to gs://<...>-1120074505-586000.1542699905.588000/pickled_main_session...
INFO:root:Completed GCS upload to gs://<...>-1120074505-586000.1542699905.588000/pickled_main_session
INFO:root:Downloading source distribtution of the SDK from PyPi
INFO:root:Executing command: ['C:\\Users\\<...>~1\\Desktop\\PROYEC~2\\env\\Scripts\\python.exe', '-m', 'pip', 'download', '--dest', 'c:\\users\\<...>~1\\appdata\\local\\temp\\tmpakq8bs', 'apache-beam==2.5.0', '--no-d
eps', '--no-binary', ':all:']
Collecting apache-beam==2.5.0
  Using cached https://files.pythonhosted.org/packages/c6/96/56469c57cb043f36bfdd3786c463fbaeade1e8fcf0593ec7bc7f99e56d38/apache-beam-2.5.0.zip
  Saved c:\users\<...>~1\appdata\local\temp\tmpakq8bs\apache-beam-2.5.0.zip
Successfully downloaded apache-beam
INFO:root:Staging SDK sources from PyPI to gs://<...>-1120074505-586000.1542699905.588000/dataflow_python_sdk.tar
INFO:root:Starting GCS upload to gs://<...>-1120074505-586000.1542699905.588000/dataflow_python_sdk.tar...
INFO:root:Completed GCS upload to gs://<...>-1120074505-586000.1542699905.588000/dataflow_python_sdk.tar
INFO:root:Downloading binary distribtution of the SDK from PyPi
INFO:root:Executing command: ['C:\\Users\\<...>~1\\Desktop\\PROYEC~2\\env\\Scripts\\python.exe', '-m', 'pip', 'download', '--dest', 'c:\\users\\<...>~1\\appdata\\local\\temp\\tmpakq8bs', 'apache-beam==2.5.0', '--no-d
eps', '--only-binary', ':all:', '--python-version', '27', '--implementation', 'cp', '--abi', 'cp27mu', '--platform', 'manylinux1_x86_64']
Collecting apache-beam==2.5.0
  Using cached https://files.pythonhosted.org/packages/ff/10/a59ba412f71fb65412ec7a322de6331e19ec8e75ca45eba7a0708daae31a/apache_beam-2.5.0-cp27-cp27mu-manylinux1_x86_64.whl
  Saved c:\users\<...>~1\appdata\local\temp\tmpakq8bs\apache_beam-2.5.0-cp27-cp27mu-manylinux1_x86_64.whl
Successfully downloaded apache-beam
INFO:root:Staging binary distribution of the SDK from PyPI to gs://<...>-1120074505-586000.1542699905.588000/apache_beam-2.5.0-cp27-cp27mu-manylinux1_x86_64.whl
INFO:root:Starting GCS upload to gs://<...>-1120074505-586000.1542699905.588000/apache_beam-2.5.0-cp27-cp27mu-manylinux1_x86_64.whl...
INFO:root:Completed GCS upload to gs://<...>-1120074505-586000.1542699905.588000/apache_beam-2.5.0-cp27-cp27mu-manylinux1_x86_64.whl
INFO:root:Create job: <Job
 createTime: u'2018-11-20T07:45:28.050865Z'
 currentStateTime: u'1970-01-01T00:00:00Z'
 id: u'2018-11-19_23_45_27-14221834310382472741'
 location: u'europe-west1'
 name: u'beamapp-<...>-1120074505-586000'
 projectId: u'poc-cloud-209212'
 stageStates: []
 steps: []
 tempFiles: []
 type: TypeValueValuesEnum(JOB_TYPE_STREAMING, 2)>

Best Answer

This is because you need to tell Dataflow to install the packages you want: the Dataflow workers run in their own environment, separate from the machine you launch the job from.

The brief documentation is here.

In short, for a PyPI package like shapely, you can do the following to make sure all dependencies get installed:

  1. pip freeze > requirements.txt
  2. Remove all irrelevant packages from requirements.txt
  3. Run your pipeline with --requirements_file requirements.txt (see the sketch after this list)
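As a minimal sketch of step 3 (assuming the Beam 2.5.0 Python SDK seen in the log above, and a requirements.txt file sitting next to the pipeline script; the file path is an assumption), the same flag can also be set programmatically through SetupOptions:

from apache_beam.options.pipeline_options import PipelineOptions, SetupOptions

pipeline_options = PipelineOptions()
# Equivalent to passing --requirements_file on the command line: Dataflow stages
# the listed packages and pip-installs them on every worker before running the job.
pipeline_options.view_as(SetupOptions).requirements_file = 'requirements.txt'  # path is an assumption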

Or, going further, if you want to install Linux packages through apt-get or use your own Python modules, take a look at this official example. For that you need to set up a setup.py and change your pipeline command to use --setup_file setup.py.

For PyPI modules, use REQUIRED_PACKAGES as in the example:

REQUIRED_PACKAGES = [
   'numpy','shapely'
]

If you use pipeline options, then add setup.py like this:

pipeline_options = {
    'project': PROJECT,
    'staging_location': 'gs://' + BUCKET + '/staging',
    'runner': 'DataflowRunner',
    'job_name': 'test',
    'temp_location': 'gs://' + BUCKET + '/temp',
    'save_main_session': True,
    'setup_file': '.\setup.py'
}
options = PipelineOptions.from_dictionary(pipeline_options)
p = beam.Pipeline(options=options)
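
Separately from getting the package installed, a pattern that often helps with this kind of NameError (not part of the answer above, just a sketch) is to do the import inside the method that uses it, so the name is resolved on the worker itself instead of relying on the pickled main session. The split on ',' below is a hypothetical stand-in for the parsing elided in the question:

class GeoDataIngestion:
    def parse_method(self, string_input):
        # Import here so the worker process resolves the name itself,
        # independent of save_main_session pickling.
        from shapely.geometry import Point
        values = string_input.split(',')  # hypothetical; the real parsing is elided above
        return Point(float(values[2]), float(values[3]))

The package still has to be installed on the workers for this import to succeed, which is exactly what the requirements_file / setup.py options above take care of.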

On the topic of "python - Google Dataflow: global name is not defined - apache beam", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/53337828/
