
Cluster deployed with docker swarm: one node publishes a port for external connections, but that node cannot join the cluster. Could someone help analyze the cause?

Elasticsearch | Author: meguoe | Published 2020-08-27 | Views: 4611

Hi, I deployed a three-node cluster with docker swarm. One of the nodes publishes a port to the host, and after starting the stack I found that this node cannot join the cluster. I tested that if no port is published, all three nodes join the cluster without problems. Could someone help me analyze the cause? I've been searching for an answer all afternoon and found nothing.
 
The stack yml contents are as follows:
version: '3.3'
 
services:
  node1:
    container_name: node1
    image: daocloud.io/library/elasticsearch:7.5.1
    networks:
      - node_net
    ports:
      - 9200:9200
    environment:
      - node.name=node1
      - cluster.name=hiddos-cluster
      - discovery.seed_hosts=node2,node3
      - cluster.initial_master_nodes=node1,node2,node3
      - TZ=Asia/Shanghai
      - ES_JAVA_OPTS=-Xms1g -Xmx1g
          
  node2:
    container_name: node2
    image: daocloud.io/library/elasticsearch:7.5.1
    networks:
      - node_net
    environment:
      - node.name=node2
      - cluster.name=hiddos-cluster
      - discovery.seed_hosts=node1,node3
      - cluster.initial_master_nodes=node1,node2,node3
      - TZ=Asia/Shanghai
      - ES_JAVA_OPTS=-Xms1g -Xmx1g
          
  node3:
    container_name: node3
    image: daocloud.io/library/elasticsearch:7.5.1
    networks:
      - node_net
    environment:
      - node.name=node3
      - cluster.name=hiddos-cluster
      - discovery.seed_hosts=node1,node2
      - cluster.initial_master_nodes=node1,node2,node3
      - TZ=Asia/Shanghai
      - ES_JAVA_OPTS=-Xms1g -Xmx1g

networks:
  node_net:
    driver: overlay
    attachable: true
    
volumes:
  node_data:
    driver: local
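A likely cause worth checking: the short `ports:` syntax publishes 9200 through swarm's ingress routing mesh, which attaches the node1 task to the `ingress` overlay network in addition to `node_net`. Elasticsearch may then pick the ingress address as its publish address, which the other nodes cannot reach. Below is a minimal sketch of two possible workarounds for the node1 service, assuming the rest of the stack file stays as above; the `_eth0_` interface name is an assumption and may differ inside your containers:

```yaml
  node1:
    container_name: node1
    image: daocloud.io/library/elasticsearch:7.5.1
    networks:
      - node_net
    ports:
      # Long port syntax: "mode: host" publishes 9200 directly on the
      # swarm node, bypassing the ingress routing mesh, so the container
      # is not attached to the ingress overlay network.
      - target: 9200
        published: 9200
        protocol: tcp
        mode: host
    environment:
      - node.name=node1
      - cluster.name=hiddos-cluster
      - discovery.seed_hosts=node2,node3
      - cluster.initial_master_nodes=node1,node2,node3
      # Alternative: pin Elasticsearch to the overlay interface so it
      # does not bind/publish the ingress address (interface name is an
      # assumption; verify with `ip addr` inside the container).
      - network.host=_eth0_
      - TZ=Asia/Shanghai
      - ES_JAVA_OPTS=-Xms1g -Xmx1g
```

Note that with `mode: host` the published port is only reachable on the swarm node actually running the node1 task, which may or may not fit your deployment.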

meguoe


node1 error message: {"type": "server", "timestamp": "2020-08-27T19:01:04,295+08:00", "level": "WARN", "component": "o.e.c.c.ClusterFormationFailureHelper", "cluster.name": "hiddos-cluster", "node.name": "node1", "message": "master not discovered or elected yet, an election requires 2 nodes with ids [7LcpiqIlRY2r2rUMs44K-w, aoq39acGStu5PxhWzJNLZQ], have discovered [{node1}{aoq39acGStu5PxhWzJNLZQ}{ecD3uMkoTmmbwPuDZvO_XQ}{10.0.0.166}{10.0.0.166:9300}{dilm}{ml.machine_memory=67375939584, xpack.installed=true, ml.max_open_jobs=20}, {node2}{7LcpiqIlRY2r2rUMs44K-w}{Q_1xAR31SXuxpseIF2d3IQ}{10.0.26.6}{10.0.26.6:9300}{dilm}{ml.machine_memory=67375939584, ml.max_open_jobs=20, xpack.installed=true}, {node3}{OR4GHHNsQtu60iknChtChQ}{ZSePihuqSbaUgSSxyC45gw}{10.0.26.8}{10.0.26.8:9300}{dilm}{ml.machine_memory=67375939584, ml.max_open_jobs=20, xpack.installed=true}] which is a quorum; discovery will continue using [10.0.26.5:9300, 10.0.26.7:9300] from hosts providers and [{node1}{aoq39acGStu5PxhWzJNLZQ}{ecD3uMkoTmmbwPuDZvO_XQ}{10.0.0.166}{10.0.0.166:9300}{dilm}{ml.machine_memory=67375939584, xpack.installed=true, ml.max_open_jobs=20}] from last-known cluster state; node term 1, last-accepted version 0 in term 0" }

Charele - Cisco4321


The error message looks messy, but it is actually not complicated once you sort it out.
 
10.0.0.166
10.0.26.6
10.0.26.8
10.0.26.5
10.0.26.7
 
I don't know docker and have never used it.
From the ES side: you have 3 nodes, so there should be only 3 IPs. Why are there 5?
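The five addresses fit the swarm networking picture: each of the three tasks gets an address on the node_net overlay (the 10.0.26.x range here), and a task whose port is published through the default routing mesh is additionally attached to the ingress network, which would explain why node1 alone reports the out-of-range 10.0.0.166. A quick way to check which networks a container is attached to, assuming the standard Docker CLI and the container name from the stack file:

```shell
# List each network the node1 container is attached to, with its address.
# If 9200 is published through the routing mesh, an extra "ingress" entry
# should appear alongside "node_net".
docker inspect \
  -f '{{range $net, $cfg := .NetworkSettings.Networks}}{{$net}}: {{$cfg.IPAddress}}{{"\n"}}{{end}}' \
  node1
```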

God_lockin


Is there another node with the same cluster name somewhere in your docker network?
