To be fair, all I want to do is have metricbeat ship system stats to elasticsearch and view them in kibana.
I read through the Elasticsearch documentation looking for clues.
Since my actual application is written in python, I based my image on python; the end goal is to ship all logs (system stats via metricbeat, application logs via filebeat) to elastic.
I can't seem to find a way to run logstash as a service inside the container.
My dockerfile:
FROM python:2.7
WORKDIR /var/local/myapp
COPY . /var/local/myapp
# logstash
RUN wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | apt-key add -
RUN apt-get update && apt-get install apt-transport-https dnsutils default-jre apt-utils -y
RUN echo "deb https://artifacts.elastic.co/packages/5.x/apt stable main" | tee -a /etc/apt/sources.list.d/elastic-5.x.list
RUN apt-get update && apt-get install -y logstash
# metricbeat
#RUN wget https://artifacts.elastic.co/downloads/beats/metricbeat/metricbeat-5.6.0-amd64.deb
RUN dpkg -i metricbeat-5.6.0-amd64.deb
RUN pip install --no-cache-dir -r requirements.txt
RUN apt-get autoremove -y
CMD bash strap_and_run.sh
And the extra scripts:
strap_and_run.sh:
python finalize_config.py
# start
echo "starting logstash..."
systemctl start logstash.service
#todo :get my_ip
echo "starting metric beat..."
/etc/init.d/metricbeat start
finalize_config.py:
import os
import requests

LOGSTASH_PIPELINE_FILE = 'logstash_pipeline.conf'
LOGSTASH_TARGET_PATH = '/etc/logstash/conf.d'
METRICBEAT_FILE = 'metricbeat.yml'
METRICBEAT_TARGET_PATH = os.path.join(os.getcwd(), 'metricbeat-5.6.0-amd64.deb')

my_ip = requests.get("https://api.ipify.org/").content

ELASTIC_HOST = os.environ.get('ELASTIC_HOST')
ELASTIC_USER = os.environ.get('ELASTIC_USER')
ELASTIC_PASSWORD = os.environ.get('ELASTIC_PASSWORD')

if not os.path.exists(LOGSTASH_TARGET_PATH):
    os.makedirs(LOGSTASH_TARGET_PATH)

# read logstash template file
with open(LOGSTASH_PIPELINE_FILE, 'r') as logstash_f:
    lines = logstash_f.readlines()

new_lines = []
for line in lines:
    new_lines.append(line
                     .replace("<elastic_host>", ELASTIC_HOST)
                     .replace("<elastic_user>", ELASTIC_USER)
                     .replace("<elastic_password>", ELASTIC_PASSWORD))

# write current file
with open(os.path.join(LOGSTASH_TARGET_PATH, LOGSTASH_PIPELINE_FILE), 'w+') as new_logstash_f:
    new_logstash_f.writelines(new_lines)

if not os.path.exists(METRICBEAT_TARGET_PATH):
    os.makedirs(METRICBEAT_TARGET_PATH)

# read metricbeat template file
with open(METRICBEAT_FILE, 'r') as metric_f:
    lines = metric_f.readlines()

new_lines = []
for line in lines:
    new_lines.append(line
                     .replace("<ip-field>", my_ip)
                     .replace("<type-field>", "test"))

# write current file
with open(os.path.join(METRICBEAT_TARGET_PATH, METRICBEAT_FILE), 'w+') as new_metric_f:
    new_metric_f.writelines(new_lines)
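As an aside, the line-by-line `.replace()` loops above can be written more compactly with Python's `string.Template`, which also fails loudly when a value is missing instead of silently writing `None` into the config. This is a minimal sketch, not the original script: the `$elastic_host`-style placeholder names and the `render` helper are our own, and the inlined pipeline snippet stands in for the real template file.

```python
import os
from string import Template

def render(template_text, values):
    # substitute() raises KeyError for any unfilled $placeholder,
    # surfacing unset environment variables at container start
    return Template(template_text).substitute(values)

# stand-in for reading logstash_pipeline.conf, using $name placeholders
pipeline_template = (
    'output { elasticsearch { hosts => ["$elastic_host"] '
    'user => "$elastic_user" password => "$elastic_password" } }'
)

rendered = render(pipeline_template, {
    "elastic_host": os.environ.get("ELASTIC_HOST", "localhost:9200"),
    "elastic_user": os.environ.get("ELASTIC_USER", "elastic"),
    "elastic_password": os.environ.get("ELASTIC_PASSWORD", "changeme"),
})
print(rendered)
```

With this, a missing `ELASTIC_HOST` (and no default) aborts `finalize_config.py` immediately rather than producing a broken pipeline file that logstash only rejects later.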
Best Answer
The reason is that there is no init system inside the container, so you should not use service or systemctl. Instead, you should start the processes in the background yourself. Your updated script would look like this:
python finalize_config.py
# start
echo "starting logstash..."
/usr/bin/logstash &
#todo :get my_ip
echo "starting metric beat..."
/usr/bin/metricbeat &
wait
You also need to add handling for TERM and other signals and kill the child processes when they arrive. If you don't, docker stop will have trouble shutting the container down cleanly. For cases like this I prefer to run a process manager such as supervisord or runit as the main PID 1.
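The signal handling described above can be sketched with a bash trap. This is a minimal sketch under the answer's assumptions: the `/usr/bin/logstash` and `/usr/bin/metricbeat` paths come from the script above and may differ in your image, and the `term_handler` name is our own.

```shell
#!/bin/bash
# Entrypoint sketch: forward SIGTERM to background children so that
# `docker stop` terminates them instead of timing out and sending SIGKILL.

pids=()

term_handler() {
  echo "caught SIGTERM, stopping children..."
  for pid in "${pids[@]}"; do
    kill -TERM "$pid" 2>/dev/null
  done
}

trap term_handler TERM INT

echo "starting logstash..."
/usr/bin/logstash &
pids+=("$!")

echo "starting metric beat..."
/usr/bin/metricbeat &
pids+=("$!")

# block until all children exit; on docker stop the trap fires first,
# the children receive TERM, and wait then returns
wait
```

This keeps the shell as PID 1 only as a stopgap; a real process manager (supervisord, runit) additionally restarts children that crash, which this sketch does not.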
Regarding docker - running logstash as a daemon in a docker container, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/46201243/