elasticsearch - How to set up two EC2 instances (with Elasticsearch installed via an AMI) as a multi-node cluster using a CloudFormation template?

Tags: elasticsearch amazon-ec2 aws-cloudformation aws-cloudformation-custom-resource

I need to create two EC2 instances from an AMI and set them up as a multi-node cluster using a CloudFormation template. The AMI has Elasticsearch installed. I need one instance to act as the master node and the other as a data node.

My CloudFormation template:

AWSTemplateFormatVersion: '2010-09-09'
#Transform: 'AWS::Serverless-2016-10-31'
Description: AWS CloudFormation Template with EC2InstanceWithSecurityGroup
Parameters:
  KeyName:
    Description: Name of an existing EC2 KeyPair to enable SSH access to the instance
    Type: AWS::EC2::KeyPair::KeyName
    ConstraintDescription: must be the name of an existing EC2 KeyPair.
  RemoteAccessLocation:
    Description: The IP address range that can be used to access to the EC2 instances
    Type: String
    MinLength: '9'
    MaxLength: '18'
    Default: 0.0.0.0/0
    AllowedPattern: (\d{1,3})\.(\d{1,3})\.(\d{1,3})\.(\d{1,3})/(\d{1,2})
    ConstraintDescription: must be a valid IP CIDR range of the form x.x.x.x/x.

Resources:
  ES1EC2Instance:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t2.2xlarge
      SecurityGroups:
        - !Ref 'InstanceSecurityGroup'
      KeyName: !Ref 'KeyName'
      ImageId: ami-xxxxxxxxxxxxxxxx
      #DependsOn: ES2EC2Instance
      UserData:
        Fn::Base64: !Sub |
              #!/bin/bash -ex

              cat > /etc/elasticsearch/elasticsearch.yml<<EOF1
              network.host: "${EC2_PRIVATE_IP}"
              http.port: 9200
              http.max_content_length: 1gb
              node.name: node-1
              node.roles: [ master, data, ingest ]
              transport.port: 9300-9400
              discovery.seed_hosts: ["${ES1EC2Instance.PrivateIp}", "${ES2EC2Instance.PrivateIp}"]
              cluster.initial_master_nodes: ["node-1"]
              gateway.recover_after_nodes: 2
              EOF1

              ## Restart Elasticsearch
              sudo systemctl restart elasticsearch
  ES2EC2Instance:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t2.2xlarge
      SecurityGroups:
        - !Ref 'InstanceSecurityGroup'
      KeyName: !Ref 'KeyName'
      ImageId: ami-xxxxxxxxxxxxxxxx
      #DependsOn: ES1EC2Instance
      DependsOn: ES1EC2Instance
      UserData:
        Fn::Base64: !Sub |
              #!/bin/bash -ex

              cat > /etc/elasticsearch/elasticsearch.yml<<EOF1
              network.host: "${ES2EC2Instance.PrivateIp}"
              http.port: 9200
              http.max_content_length: 1gb
              node.name: node-2
              node.roles: [ data, ingest ]
              transport.port: 9300-9400
              discovery.seed_hosts: ["${ES1EC2Instance.PrivateIp}", "${ES2EC2Instance.PrivateIp}"]
              cluster.initial_master_nodes: ["node-1"]
              gateway.recover_after_nodes: 2
              EOF1

              ## Restart Elasticsearch
              sudo systemctl restart elasticsearch

  InstanceSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Enable SSH (22), HTTP (8080) and Elasticsearch (9200) access
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: '22'
          ToPort: '22'
          CidrIp: !Ref 'RemoteAccessLocation'
        - CidrIp: 0.0.0.0/0
          FromPort: '8080'
          IpProtocol: tcp
          ToPort: '8080'
        - IpProtocol: tcp
          FromPort: '9200'
          ToPort: '9200'
          CidrIp: !Ref 'RemoteAccessLocation'

Outputs:
  AZ:
    Description: Availability Zone of the newly created EC2 instance for ES
    Value: !GetAtt 'ES1EC2Instance.AvailabilityZone'
  PublicDNS:
    Description: Public DNSName of the newly created EC2 instance for ES
    Value: !GetAtt 'ES1EC2Instance.PublicDnsName'
  PublicIP:
    Description: Public IP address of the newly created EC2 instance for ES
    Value: !GetAtt 'ES1EC2Instance.PublicIp'

How can I update elasticsearch.yml through the CloudFormation template so that the two instances form a multi-node cluster?

Best Answer

Try adding a macro to your CloudFormation template. Below is an example of a Lambda function that such a macro can invoke. The function uses SSM Run Command to send bash commands to your instances; in this case the instances are filtered by Auto Scaling group, but you can filter them by tag or any other attribute. You will also need to attach an IAM role with the AmazonEC2RoleforSSM policy to the instances.

import json
import boto3
import time

ssm = boto3.client('ssm')
ec2 = boto3.client('ec2')
autoscaling = boto3.client('autoscaling')

def lambda_handler(event, context):
    env = event['environment']
    kafka = event['kafka']
    print(env, kafka)
    status = check_autoscaling_group(env)
    if status:
        add_IP(env, kafka)
        return {
            'body': json.dumps('Function executed, please see the logs!')
        }
    else:
        return {
            'body': json.dumps('The function was not executed, please see the logs!')
        }

def check_autoscaling_group(env):
    # Only proceed once every master instance in the auto scaling group is InService.
    response = autoscaling.describe_auto_scaling_groups(
        AutoScalingGroupNames=[
            'elasticsearch-master-' + str(env)
        ]
    )
    stable = True
    for instance in response['AutoScalingGroups'][0]['Instances']:
        if instance['LifecycleState'] != "InService":
            stable = False
            message = "At least 1 master instance is not in Running status, please wait until all the master nodes are stable."
            print(message)
            break
    return stable

def add_IP(env, kafka):
    response = ec2.describe_instances(
        Filters=[
            {
                'Name': 'tag:Name',
                'Values': [
                    'elasticsearch-masters-server-'+str(env)
                ]
            },
            {
                'Name': 'instance-state-name',
                'Values': [
                    'running'
                ]
            },
            {
                'Name': 'tag:role',
                'Values': [
                    'master'
                ]
            }
        ]
    )

    for instances in response['Reservations']:
        id = instances['Instances'][0]['InstanceId']
        ip = instances['Instances'][0]['PrivateIpAddress']
        
        #Add ES server IP to Kibana config
        command_to_execute = 'sed -i "s/ELASTICSEARCH_HOSTS: http:\/\/.*/ELASTICSEARCH_HOSTS: http:\/\/'+str(ip)+':9200/g" /home/ubuntu/Kibana/docker-compose.yml'
        execute_in_master(command_to_execute, id, env)
        
        #Add kafka server IP to logstash conf
        kafka = '\\"'+str(kafka)+':9092\\"'
        command_to_execute = 'sed -i "s/bootstrap_servers => .*/bootstrap_servers => ['+str(kafka)+']/g" /home/ubuntu/Logstash/config/logstash/pipeline/my_pipeline.conf'
        execute_in_master(command_to_execute, id, env)
        
        #Add ES server IP to logstash config
        ip = '\\"'+str(ip)+':9200\\"'
        command_to_execute = 'sed -i "s/hosts => .*/hosts => '+str(ip)+'/g" /home/ubuntu/Logstash/config/logstash/pipeline/my_pipeline.conf'
        execute_in_master(command_to_execute, id, env)
        
        #Restart services
        command_to_execute = 'cd /home/ubuntu/Kibana/ && docker-compose up -d'
        execute_in_master(command_to_execute, id, env)
        
        command_to_execute = 'cd /home/ubuntu/Logstash/ && docker-compose up -d'
        execute_in_master(command_to_execute, id, env)
        
        #Just select one master instance
        break
        
def execute_in_master(command_to_execute, id, env):
    print (command_to_execute)
    response = ec2.describe_instances(
        Filters=[
            {
                'Name': 'tag:Name',
                'Values': [
                    'elasticsearch-masters-server-'+str(env)
                ]
            },
            {
                'Name': 'instance-state-name',
                'Values': [
                    'running'
                ]
            },
            {
                'Name': 'tag:role',
                'Values': [
                    'master'
                ]
            }
        ]
    )
    
    for instances in response['Reservations']:
        instance_id = instances['Instances'][0]['InstanceId']
        if instance_id == id:
            response = ssm.send_command(
                InstanceIds=[instance_id],
                DocumentName="AWS-RunShellScript",
                Parameters={'commands': [command_to_execute]}
            )
            time.sleep(1)   
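As noted above, SSM Run Command only reaches instances that carry an IAM role with the AmazonEC2RoleforSSM policy. A minimal sketch of such a role and instance profile in the instance template (the resource names here are assumptions, not part of the answer); each instance would reference it via `IamInstanceProfile: !Ref SSMInstanceProfile`:

```yaml
  SSMInstanceRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service: ec2.amazonaws.com
            Action: sts:AssumeRole
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/service-role/AmazonEC2RoleforSSM
  SSMInstanceProfile:
    Type: AWS::IAM::InstanceProfile
    Properties:
      Roles:
        - !Ref SSMInstanceRole
```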

To invoke the macro from inside your CloudFormation template, add the following:

  ModifyInstances:
    Fn::Transform:
        Name: MacroSetUpCluster
        Parameters:
          env: !Ref MyEnv
          kafka: !Ref MyKafkaIP

The MacroSetUpCluster stack, containing the Lambda function, needs to be deployed beforehand.
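One detail worth spelling out: when CloudFormation invokes a macro, the Lambda receives the template fragment plus the `Parameters` from `Fn::Transform` (under the `params` key), and must return the fragment together with the `requestId` and a status. A minimal sketch of that entry point (the name `macro_handler` is a hypothetical choice, and wiring it to the SSM logic above is left as a comment):

```python
def macro_handler(event, context):
    # Parameters passed via Fn::Transform arrive under "params".
    params = event.get("params", {})
    env = params.get("env")
    kafka = params.get("kafka")

    # A real macro would trigger the SSM/EC2 logic shown earlier here,
    # using env and kafka.

    # CloudFormation requires requestId, status, and the template fragment
    # in the response; this macro only causes side effects, so the fragment
    # is returned unchanged.
    return {
        "requestId": event["requestId"],
        "status": "success",
        "fragment": event["fragment"],
    }
```

The macro itself is registered in the macro stack with an `AWS::CloudFormation::Macro` resource whose `FunctionName` points at this Lambda's ARN.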

Regarding elasticsearch - How to set up two EC2 instances (with Elasticsearch installed via an AMI) as a multi-node cluster using a CloudFormation template?, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/67020222/
