Below is the Python code I run to invoke Sqoop, but it does not capture the logs, apart from the lines below:
Warning: /usr/hdp/2.6.4.0-91/accumulo does not exist! Accumulo imports will fail. Please set $ACCUMULO_HOME to the root of your Accumulo installation.
import subprocess

job = "sqoop-import --direct --connect 'jdbc:sqlserver://host' --username myuser --password-file /user/ivr_sqoop --table data_app_det --delete-target-dir --verbose --split-by attribute_name_id --where \"db_process_time BETWEEN ('2018-07-15') and ('9999-12-31')\""
print job
with open('save.txt', 'w') as fp:
    proc = subprocess.Popen(job, stdout=fp, stderr=subprocess.PIPE, shell=True)
    stdout, stderr = proc.communicate()
print "Here is the return code :: " + str(proc.returncode)
print stdout
Please let me know if there is a problem with the way I am calling it.
Note: the same sqoop command run on its own works fine and produces all the logs.
I also tried the following, with the same result:
import subprocess
job = "sqoop-import --direct --connect 'jdbc:sqlserver://host' --username myuser --password-file /user/ivr_sqoop --table data_app_det --delete-target-dir --verbose --split-by attribute_name_id --where \"db_process_time BETWEEN ('2018-07-15') and ('9999-12-31')\""
proc = subprocess.Popen(job, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
stdout, stderr = proc.communicate()
and also tried appending "2> mylog.log" at the end of the command:
import subprocess
job = "sqoop-import --direct --connect 'jdbc:sqlserver://host' --username myuser --password-file /user/ivr_sqoop --table data_app_det --delete-target-dir --verbose --split-by attribute_name_id --where \"db_process_time BETWEEN ('2018-07-15') and ('9999-12-31')\" > mylog.log "
proc = subprocess.Popen(job, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
stdout, stderr = proc.communicate()
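For context, Sqoop logs through log4j, which writes to stderr rather than stdout, so any approach that only captures stdout will miss the log output. A minimal sketch of merging the two streams into one (the echo command here is a hypothetical stand-in for the sqoop-import invocation):

```python
import subprocess

# Stand-in command that writes to both streams, like sqoop does.
cmd = "echo 'on stdout'; echo 'on stderr' >&2"
proc = subprocess.Popen(cmd,
                        stdout=subprocess.PIPE,
                        stderr=subprocess.STDOUT,   # merge stderr into stdout
                        shell=True,
                        universal_newlines=True)
output, _ = proc.communicate()
print(output)
```

With stderr=subprocess.STDOUT, everything the child writes on either stream arrives in a single pipe, so the log lines cannot be silently lost in an unread stderr buffer.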
I found the following similar question, but it has no answer either:
Subprocess Popen : Ignore Accumulo warning and continue execution of Sqoop
Best Answer
It does not capture the Sqoop logs because you added shell=True. Remove shell=True from the call and add universal_newlines=True, and it will show the console logs.
Working code:
import subprocess
import logging

logging.basicConfig(format='%(levelname)s:%(message)s', level=logging.DEBUG)

# Function to run Hadoop command
def run_unix_cmd(args_list):
    """
    Run Linux commands.
    """
    print('Running system command: {0}'.format(' '.join(args_list)))
    proc = subprocess.Popen(args_list, stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=True)
    s_output, s_err = proc.communicate()
    s_return = proc.returncode
    return s_return, s_output, s_err

# Create Sqoop job
def sqoop_job():
    """
    Create Sqoop job.
    """
    cmd = ['sqoop', 'import', '--connect', 'jdbc:oracle:thin:@//host:port/schema', '--username', 'user', '--password', 'XX', '--query', '"your query"', '-m', '1', '--target-dir', 'tgt_dir']
    print(cmd)
    (ret, out, err) = run_unix_cmd(cmd)
    print(ret, out, err)
    if ret == 0:
        logging.info('Success.')
    else:
        logging.error('Error.')

if __name__ == '__main__':
    sqoop_job()
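One limitation of communicate() is that it blocks until the child exits, so nothing is shown while a long Sqoop job runs. A hedged sketch of streaming the merged output line by line instead (the sys.executable command is a hypothetical stand-in for the sqoop argument list):

```python
import subprocess
import sys

# Stand-in child process that emits a couple of log-like lines.
cmd = [sys.executable, "-c",
       "import sys; print('step 1', flush=True); sys.stderr.write('step 2\\n')"]

lines = []
proc = subprocess.Popen(cmd,
                        stdout=subprocess.PIPE,
                        stderr=subprocess.STDOUT,   # merge both streams
                        universal_newlines=True)
for line in proc.stdout:        # yields lines as the child emits them
    lines.append(line.rstrip())
    print(line, end='')
proc.wait()
```

This prints each log line as soon as it arrives instead of dumping everything at the end, which is usually what you want for monitoring a long-running import.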
Regarding "python - capture sqoop logs with Popen stdout", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/51424854/