Spark standalone HA
Hosts: node1, node2, node3
Masters: node1, node2
Workers (slaves): node2, node3
Edit the configuration files.

node1, node3: spark-env.sh
export SPARK_MASTER_IP=node1
export SPARK_MASTER_PORT=7077
export SPARK_WORKER_CORES=1
export SPARK_WORKER_INSTANCES=1
export SPARK_WORKER_MEMORY=1024m
export SPARK_LOCAL_DIRS=/data/spark/dataDir
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=node1:2181,node2:2181,node3:2181 -Dspark.deploy.zookeeper.dir=/sparkHA"

node2: spark-env.sh
export SPARK_MASTER_IP=node2
This is the only difference between node2 and node1.
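The SPARK_DAEMON_JAVA_OPTS line is easy to get wrong when the host list changes. A minimal shell sketch that builds it from a host list; the hostnames and the /sparkHA dir come from the setup above, and ZooKeeper's default client port 2181 is assumed:

```shell
# Build the ZooKeeper connection string: comma-separated host:2181 pairs
hosts="node1 node2 node3"
zk_url=$(printf '%s:2181,' $hosts | sed 's/,$//')

# Emit the HA line for spark-env.sh
echo "export SPARK_DAEMON_JAVA_OPTS=\"-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=$zk_url -Dspark.deploy.zookeeper.dir=/sparkHA\""
```

Regenerating the line this way keeps the zookeeper.url in sync when nodes are added or removed.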
Startup scripts. ZooKeeper is assumed to be up already; its configuration and startup are not covered here.

Start Spark:
node1: sbin/start-all.sh
node2: sbin/start-master.sh
(paths are relative to the Spark install directory)
Test HA by stopping the master on node1:
sbin/stop-master.sh
Check node2's master web UI: before node1's master is stopped, node2's master is in STANDBY state; after node1's master is stopped, visit node2 again and it has taken over as the active (ALIVE) master.
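The failover can also be checked without a browser: the standalone master's web UI serves a JSON summary at /json whose "status" field reads ALIVE or STANDBY. A hedged sketch, assuming the master UI is on its default port 8080 (parse_status and master_status are helpers introduced here, not Spark commands):

```shell
# Extract the "status" field from the master's /json output
parse_status() { sed -n 's/.*"status"[[:space:]]*:[[:space:]]*"\([A-Z]*\)".*/\1/p'; }

# Query a master's state; assumes the master web UI listens on port 8080
master_status() { curl -s "http://$1:8080/json" | parse_status; }

# Before failover: master_status node1 reports ALIVE, master_status node2 reports STANDBY.
# After stopping node1's master, master_status node2 flips to ALIVE.
```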
Source: http://www.yjs001.cn/bigdata/spark/24262537393501817043.html