ZooKeeper Install & Configuration

Setting up a pseudo-cluster on a single machine

1. Environment

  • CentOS 7 server, JDK 8 (jdk-8u271-linux-x64), Apache ZooKeeper 3.6.3; three ZooKeeper instances on one machine

2. Install JDK

  • Download the Java 8 package from the Oracle website

    https://www.oracle.com/java/technologies/javase/javase-jdk8-downloads.html

    Package name: jdk-8u271-linux-x64.tar.gz

    The package is also available on the NAS under: 公共资源>Tools>Develop>JDK

  • Upload the package to the CentOS 7 server over SFTP; here it goes in /home

    cd /home
    ls -l
    
  • Create the installation directory

    mkdir -p /usr/local/java/
    
  • Extract into the installation directory

    tar -zxvf jdk-8u271-linux-x64.tar.gz -C /usr/local/java/
    
  • Set the environment variables; open the file

    vi /etc/profile
    
  • Append the following at the end

    export JAVA_HOME=/usr/local/java/jdk1.8.0_271
    export JRE_HOME=${JAVA_HOME}/jre
    export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
    export PATH=${JAVA_HOME}/bin:$PATH
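
The profile edit above can also be done non-interactively. A minimal sketch; the `append_java_env` helper and its file argument are illustrative, not part of the original steps (point it at /etc/profile on the real host):

```shell
#!/bin/bash
# Append the Java environment variables to the profile file given as $1.
# Usage: append_java_env /etc/profile
append_java_env() {
  cat >> "$1" <<'EOF'
export JAVA_HOME=/usr/local/java/jdk1.8.0_271
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${JAVA_HOME}/bin:$PATH
EOF
}
```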
    
  • Save the file

  • Apply the new environment variables

    source /etc/profile
    
  • Create a symlink

    ln -s /usr/local/java/jdk1.8.0_271/bin/java /usr/bin/java
    
  • Verify the installation

    java -version
    

3. Install ZooKeeper

  • Upload the package to the CentOS 7 server over SFTP; here it goes in /home

    cd /home
    ls -l
    
  • Extract into /usr/local and rename the directory

    tar -zxvf apache-zookeeper-3.6.3-bin.tar.gz -C /usr/local
    cd /usr/local && ls -l
    mv apache-zookeeper-3.6.3-bin zookeeper
    
  • Enter the /usr/local/zookeeper/conf/ directory:

    cd /usr/local/zookeeper/conf/
    
  • In /usr/local/zookeeper/conf/, make three copies of zoo_sample.cfg, named

    zoo1.cfg、zoo2.cfg、zoo3.cfg

    cp -i zoo_sample.cfg zoo1.cfg
    cp -i zoo_sample.cfg zoo2.cfg
    cp -i zoo_sample.cfg zoo3.cfg
    
  • Edit the configuration in each file (only clientPort, dataDir and dataLogDir differ). Note: since ZooKeeper 3.5 each instance also starts an AdminServer on port 8080 by default, so three instances on one host will collide on that port; giving each file a distinct admin.serverPort (e.g. 8081-8083), or setting admin.enableServer=false, avoids this:

    # zoo1.cfg
    clientPort=2181
    dataDir=/tmp/zookeeper/data_1
    dataLogDir=/tmp/zookeeper/logs_1
    server.1=localhost:2287:3387
    server.2=localhost:2288:3388
    server.3=localhost:2289:3389
    
    # zoo2.cfg
    clientPort=2182
    dataDir=/tmp/zookeeper/data_2
    dataLogDir=/tmp/zookeeper/logs_2
    server.1=localhost:2287:3387
    server.2=localhost:2288:3388
    server.3=localhost:2289:3389
    
    # zoo3.cfg
    clientPort=2183
    dataDir=/tmp/zookeeper/data_3
    dataLogDir=/tmp/zookeeper/logs_3
    server.1=localhost:2287:3387
    server.2=localhost:2288:3388
    server.3=localhost:2289:3389
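
In each `server.N=host:portA:portB` entry, portA is the quorum (follower-to-leader) port and portB is the leader-election port; only clientPort and the data/log directories vary between the three files, so the repetition can be scripted. A sketch; the `gen_zoo_cfgs` helper is illustrative (tickTime/initLimit/syncLimit are the zoo_sample.cfg defaults):

```shell
#!/bin/bash
# Generate zoo1.cfg..zoo3.cfg in the conf dir $1, with data/log dirs under $2.
# Ports: clientPort 2181-2183; quorum 2287-2289; election 3387-3389.
gen_zoo_cfgs() {
  confdir=$1; dataroot=$2
  for i in 1 2 3; do
    cat > "$confdir/zoo$i.cfg" <<EOF
tickTime=2000
initLimit=10
syncLimit=5
clientPort=$((2180 + i))
dataDir=$dataroot/data_$i
dataLogDir=$dataroot/logs_$i
server.1=localhost:2287:3387
server.2=localhost:2288:3388
server.3=localhost:2289:3389
EOF
  done
}
```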
    
  • Important: create each /tmp/zookeeper/data_N directory by hand, and put a myid file in it whose value matches that node's server.N entry:
    myid under data_1 is 1,
    myid under data_2 is 2,
    myid under data_3 is 3.

    mkdir -pv /tmp/zookeeper/data_1 && echo 1 > /tmp/zookeeper/data_1/myid
    mkdir -pv /tmp/zookeeper/logs_1
    
    mkdir -pv /tmp/zookeeper/data_2 && echo 2 > /tmp/zookeeper/data_2/myid
    mkdir -pv /tmp/zookeeper/logs_2
    
    mkdir -pv /tmp/zookeeper/data_3 && echo 3 > /tmp/zookeeper/data_3/myid
    mkdir -pv /tmp/zookeeper/logs_3
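
The directory and myid setup above can be done in one loop. A sketch with a configurable root; the `make_zk_dirs` name is illustrative:

```shell
#!/bin/bash
# Create data_N/logs_N under the root $1 and write each node's id to its myid file.
# Usage: make_zk_dirs /tmp/zookeeper
make_zk_dirs() {
  root=$1
  for i in 1 2 3; do
    mkdir -p "$root/data_$i" "$root/logs_$i"
    echo "$i" > "$root/data_$i/myid"   # must match the server.N entry in zoo$i.cfg
  done
}
```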
    
  • Configure the hosts file

    vi /etc/hosts
    
    Note: the server.N=… lines belong in the zoo*.cfg files, not in /etc/hosts. Because the configs above address every node as localhost, no /etc/hosts entry is actually required here; one is only needed if you switch the server.N entries to hostnames.
    
  • Run the start command from the /usr/local/zookeeper/ directory

    cd /usr/local/zookeeper/
    
    # Start
    ./bin/zkServer.sh start zoo1.cfg
    ./bin/zkServer.sh start zoo2.cfg
    ./bin/zkServer.sh start zoo3.cfg
    
    # Stop
    ./bin/zkServer.sh stop zoo1.cfg
    ./bin/zkServer.sh stop zoo2.cfg
    ./bin/zkServer.sh stop zoo3.cfg
    
    # Check status
    ./bin/zkServer.sh status zoo1.cfg
    ./bin/zkServer.sh status zoo2.cfg
    ./bin/zkServer.sh status zoo3.cfg
    

    Sample output:

    ZooKeeper JMX enabled by default
    Using config: /usr/local/zookeeper/bin/../conf/zoo1.cfg
    Starting zookeeper ... STARTED
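
The three start/stop/status invocations can be wrapped in a single helper. A sketch, assuming the layout above; the `zk_all` name and the ZK_HOME variable (defaulting to /usr/local/zookeeper) are illustrative:

```shell
#!/bin/bash
# Run a zkServer.sh subcommand (start/stop/status) against all three configs.
zk_all() {
  for i in 1 2 3; do
    "${ZK_HOME:-/usr/local/zookeeper}/bin/zkServer.sh" "$1" "zoo$i.cfg"
  done
}
# Usage: zk_all start; zk_all status; zk_all stop
```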
    
  • Check that the ports are listening

    netstat -lnptu
    
  • Check the running processes (each node shows up as QuorumPeerMain)

    jps
    
  • View the logs

    cat /usr/local/zookeeper/logs/zookeeper-root-server-[hostname].out
    

4. Troubleshooting

  • (error screenshot omitted)

    If this error appears right after ZooKeeper starts and then stops recurring, it is normal: some nodes are already up while others are not, and the running nodes keep trying to contact the missing ones, logging this error until the whole quorum is up. Nothing to worry about.

    If the error keeps appearing long after startup, try the following fix.

    In each node's own zoo*.cfg, change that node's corresponding server.x entry to bind 0.0.0.0:

    server.x=0.0.0.0:2287:3387

    server.x=0.0.0.0:2288:3388

    server.x=0.0.0.0:2289:3389

    Restart ZooKeeper after the change; the error should disappear.
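
The per-file edit can be applied with sed. A sketch; the `rebind_own_entry` helper is illustrative, taking the config file and that node's own index (e.g. index 1 for zoo1.cfg):

```shell
#!/bin/bash
# In config file $1, rebind node $2's OWN server entry to 0.0.0.0,
# leaving the other nodes' entries untouched.
rebind_own_entry() {
  sed -i "s/^server\.$2=localhost/server.$2=0.0.0.0/" "$1"
}
# e.g. rebind_own_entry /usr/local/zookeeper/conf/zoo1.cfg 1
```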

5. Configure Auto-Start

  • Node 1

    vi /etc/systemd/system/zookeeper1.service
    
    [Unit]
    Description=Zookeeper-2181
    After=network.target
    
    [Service]
    Type=forking
    ExecStart=/usr/local/zookeeper/bin/zkServer.sh start zoo1.cfg
    ExecStop=/usr/local/zookeeper/bin/zkServer.sh stop zoo1.cfg
    Restart=always
    RestartSec=10
    TimeoutSec=360
    
    [Install]
    WantedBy=multi-user.target
    

    After saving and exiting, run:

    systemctl daemon-reload
    systemctl start zookeeper1.service
    systemctl enable zookeeper1.service
    
  • Node 2

    vi /etc/systemd/system/zookeeper2.service
    
    [Unit]
    Description=Zookeeper-2182
    After=network.target
    
    [Service]
    Type=forking
    ExecStart=/usr/local/zookeeper/bin/zkServer.sh start zoo2.cfg
    ExecStop=/usr/local/zookeeper/bin/zkServer.sh stop zoo2.cfg
    Restart=always
    RestartSec=10
    TimeoutSec=360
    
    [Install]
    WantedBy=multi-user.target
    

    After saving and exiting, run:

    systemctl daemon-reload
    systemctl start zookeeper2.service
    systemctl enable zookeeper2.service
    
  • Node 3

    vi /etc/systemd/system/zookeeper3.service
    
    [Unit]
    Description=Zookeeper-2183
    After=network.target
    
    [Service]
    Type=forking
    ExecStart=/usr/local/zookeeper/bin/zkServer.sh start zoo3.cfg
    ExecStop=/usr/local/zookeeper/bin/zkServer.sh stop zoo3.cfg
    Restart=always
    RestartSec=10
    TimeoutSec=360
    
    [Install]
    WantedBy=multi-user.target
    

    After saving and exiting, run:

    systemctl daemon-reload
    systemctl start zookeeper3.service
    systemctl enable zookeeper3.service
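
The three near-identical units could also be collapsed into one systemd template, where `%i` is replaced by the instance name. A sketch; the `zookeeper@.service` file name is a suggestion, not part of the original setup:

```ini
# /etc/systemd/system/zookeeper@.service
[Unit]
Description=Zookeeper node %i
After=network.target

[Service]
Type=forking
ExecStart=/usr/local/zookeeper/bin/zkServer.sh start zoo%i.cfg
ExecStop=/usr/local/zookeeper/bin/zkServer.sh stop zoo%i.cfg
Restart=always
RestartSec=10
TimeoutSec=360

[Install]
WantedBy=multi-user.target
```

Then each node is enabled as an instance: `systemctl enable --now zookeeper@1 zookeeper@2 zookeeper@3`.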
    

6. Configure the Firewall

  • Check which ports the ZooKeeper pseudo-cluster is using

    netstat -lnpt
    
  • Open the ports; here 2181-2183

    firewall-cmd --permanent --zone=public --add-port=2181-2183/tcp
    
  • Reload the firewall

    firewall-cmd --reload
    
  • List the open ports

    firewall-cmd --list-ports