
Big Data Cluster: a long, long blog post

2017-09-06 23:52

IP assignments:

Six virtual machines are provisioned on the server:

hadoop1: 8 GB RAM, 2 TB disk

hadoop2: 8 GB RAM, 2 TB disk

hadoop3: 8 GB RAM, 2 TB disk

zookeeper: 8 GB RAM, 2 TB disk

redis: 8 GB RAM, 2 TB disk

ethings: 8 GB RAM, 2 TB disk

192.168.56.101 hadoop1 == hadoop2.7.4 + zookeeper3.4.10 + hbase1.2.6 + hive2.1.1 + mariadb5.5

192.168.56.102 hadoop2 == hadoop2.7.4 + zookeeper3.4.10 + hbase1.2.6

192.168.56.103 hadoop3 == hadoop2.7.4 + zookeeper3.4.10 + hbase1.2.6

192.168.56.104 zookeeper == (spare)

192.168.56.105 redis == redis4.0.1 + mysql(mariadb5.5)

192.168.56.106 ethings == application platform

192.168.56.107 hadoop4 == hadoop2.7.4 + zookeeper3.4.10 + hbase1.2.6

192.168.56.108 hadoop5 == hadoop2.7.4 + zookeeper3.4.10 + hbase1.2.6

This is the full environment built on the provided server.

Data is currently stored on hadoop2 and hadoop3.

Starting and stopping the storage platform

Start:

1. Start ZooKeeper first:
hadoop1: zkServer.sh start
hadoop2: zkServer.sh start
hadoop3: zkServer.sh start
2. Start the Hadoop cluster:
hadoop1: start-all.sh
3. Start HBase last (it needs HDFS and ZooKeeper to be up):
hadoop1: start-hbase.sh

Stop:

1. Stop HBase first:
hadoop1: stop-hbase.sh
2. Stop the Hadoop cluster:
hadoop1: stop-all.sh
3. Stop ZooKeeper last:
hadoop1: zkServer.sh stop
hadoop2: zkServer.sh stop
hadoop3: zkServer.sh stop
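
Both sequences can be wrapped in a small helper script. The sketch below assumes passwordless SSH from hadoop1 to the other nodes and that the ZooKeeper, Hadoop and HBase bin/sbin directories are on the hadoop user's PATH on every machine; adjust hostnames and paths to your layout.

#!/bin/bash
# start-cluster.sh: bring the storage platform up in dependency order (sketch).
ZK_NODES="hadoop1 hadoop2 hadoop3"

# 1. ZooKeeper on every quorum node
for host in $ZK_NODES; do
    ssh "$host" "zkServer.sh start"
done

# 2. HDFS + YARN from the NameNode (run on hadoop1)
start-all.sh

# 3. HBase last, once HDFS is up
start-hbase.sh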


hadoop

core-site.xml

<configuration>
<!-- RPC address of the HDFS NameNode -->
<property>
<name>fs.defaultFS</name>
<value>hdfs://hadoop1:9000</value>
</property>
<!-- Directory for Hadoop's runtime temporary files -->
<property>
<name>hadoop.tmp.dir</name>
<value>file:/usr/local/hadoop/tmp</value>
</property>
</configuration>


hdfs-site.xml

<configuration>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>hadoop1:9001</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/usr/local/hadoop/hdfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/usr/local/hadoop/hdfs/data</value>
</property>
<!-- HDFS replication factor -->
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
</configuration>
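
With core-site.xml and hdfs-site.xml in place, the NameNode has to be formatted once before the very first start. A minimal sketch (run on hadoop1 only; never re-run it on a cluster that already holds data):

# One-time initialization; wipes the metadata under dfs.namenode.name.dir
hdfs namenode -format

# After start-all.sh, confirm that the DataNodes have registered
hdfs dfsadmin -report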


On the client side, set the environment variable HADOOP_USER_NAME=hadoop (this is needed when debugging from a client IDE such as Eclipse or IDEA); with it set, HDFS permission errors no longer occur.
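
For a command-line client the same thing works as a plain environment variable; in an IDE, add it to the run configuration's environment. A sketch, assuming the HDFS side expects the hadoop user as above:

# Make HDFS treat requests from this shell as coming from the hadoop user
export HADOOP_USER_NAME=hadoop
hadoop fs -ls hdfs://hadoop1:9000/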

Also, a note: the virtual machines were created with the hadoop user from the start, so there is no need to create that user and group separately.

mapred-site.xml

<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>hadoop1:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>hadoop1:19888</value>
</property>
</configuration>
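
The job-history addresses configured above only matter if the history server is actually running; it is not started by start-all.sh. A sketch (run on hadoop1, with the Hadoop sbin directory on PATH):

# Start the MapReduce job history server (web UI on hadoop1:19888)
mr-jobhistory-daemon.sh start historyserver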


slaves

hadoop2
hadoop3
hadoop4
hadoop5


yarn-site.xml

<configuration>
<!-- Reducers fetch map output via mapreduce_shuffle -->
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>hadoop1:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>hadoop1:8030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>hadoop1:8031</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>hadoop1:8033</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>hadoop1:8088</value>
</property>
</configuration>
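
After the cluster is started, a quick sanity check that the NodeManagers have registered with the ResourceManager (a sketch):

# List NodeManagers known to the ResourceManager (hadoop1:8032)
yarn node -list

# The ResourceManager web UI is at http://hadoop1:8088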


hbase

hbase-site.xml

<configuration>
<property>
<name>hbase.rootdir</name>
<value>hdfs://hadoop1:9000/hbase</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>hadoop1,hadoop2,hadoop3,hadoop4,hadoop5</value>
</property>
<property>
<name>zookeeper.session.timeout</name>
<value>60000000</value>
</property>
<property>
<name>dfs.support.append</name>
<value>true</value>
</property>
</configuration>
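
Once HBase is up, its view of the cluster can be checked from the HBase shell (a sketch; run on hadoop1):

# Print the number of live/dead region servers and the average load
echo "status" | hbase shell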


hbase-env.sh

export HBASE_OPTS="-XX:+UseConcMarkSweepGC"
# Configure PermSize. Only needed in JDK7. You can safely remove it for JDK8+
export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS -XX:PermSize=128m -XX:MaxPermSize=128m"
export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -XX:PermSize=128m -XX:MaxPermSize=128m"


regionservers

hadoop2
hadoop3
hadoop4
hadoop5


Setting up a VirtualBox shared folder

mount -t vboxsf share /home/hadoop/mount_point/


VBoxGuestAdditions.iso must be installed inside the virtual machine.

Mount the ISO: mount /dev/cdrom /home/hadoop/mount_point/

cd /home/hadoop/mount_point/

sh ./VBoxLinuxAdditions.run

The installer may fail along the way; fix the problems it reports in its error log.

It requires:

sudo yum install gcc kernel kernel-devel

Once that succeeds, the shared folder can be mounted:

mount -t vboxsf share /home/hadoop/mount_point/
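
To avoid re-running the mount command after every reboot, the share can also go into /etc/fstab. A sketch, assuming the share is named share as above and the vboxsf module from the Guest Additions is available at boot:

# Append an fstab entry so the shared folder mounts automatically at boot
echo "share /home/hadoop/mount_point vboxsf defaults 0 0" | sudo tee -a /etc/fstab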

ZooKeeper configuration

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/usr/local/zookeeper/tmp/data
dataLogDir=/usr/local/zookeeper/tmp/logs
clientPort=2181
server.1=hadoop1:2888:3888
server.2=hadoop2:2888:3888
server.3=hadoop3:2888:3888
#server.4=hadoop4:2888:3888
#server.5=hadoop5:2888:3888
#maxClientCnxns=60
#autopurge.snapRetainCount=3
#autopurge.purgeInterval=1


After configuring, create a myid file in the dataDir directory on each node and set it to that node's id (1, 2, 3, ...), matching the server.N entries above; see the sketch below.
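
A minimal sketch of writing the ids, assuming passwordless SSH and the dataDir from the configuration above:

# Each ZooKeeper node needs a myid file matching its server.N line in zoo.cfg
ssh hadoop1 "mkdir -p /usr/local/zookeeper/tmp/data && echo 1 > /usr/local/zookeeper/tmp/data/myid"
ssh hadoop2 "mkdir -p /usr/local/zookeeper/tmp/data && echo 2 > /usr/local/zookeeper/tmp/data/myid"
ssh hadoop3 "mkdir -p /usr/local/zookeeper/tmp/data && echo 3 > /usr/local/zookeeper/tmp/data/myid"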

Hive configuration (hive-site.xml)

<configuration>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true&amp;characterEncoding=UTF-8&amp;useSSL=false</value>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>root</value>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>root</value>
</property>
<!-- Location of the Hive warehouse on HDFS -->
<property>
<name>hive.metastore.warehouse.dir</name>
<value>/user/hive/warehouse</value>
</property>

<!-- Temporary storage for downloaded resources -->
<property>
<name>hive.downloaded.resources.dir</name>
<value>/usr/local/hive/tmp/${hive.session.id}_resources</value>
</property>
<!-- Before Hive 0.9, hive.exec.dynamic.partition had to be set to true explicitly; from 0.9 on it defaults to true -->
<property>
<name>hive.exec.dynamic.partition</name>
<value>true</value>
</property>
<property>
<name>hive.exec.dynamic.partition.mode</name>
<value>nonstrict</value>
</property>
<!-- Relocate log and scratch directories -->
<property>
<name>hive.exec.local.scratchdir</name>
<value>/usr/local/hive/tmp/HiveJobsLog</value>
</property>
<property>
<name>hive.downloaded.resources.dir</name>
<value>/usr/local/hive/tmp/ResourcesLog</value>
</property>
<property>
<name>hive.querylog.location</name>
<value>/usr/local/hive/tmp/HiveRunLog</value>
</property>
<property>
<name>hive.server2.logging.operation.log.location</name>
<value>/usr/local/hive/tmp/OperationLogs</value>
</property>
<!-- HWI (Hive Web Interface) settings -->
<property>
<name>hive.hwi.war.file</name>
<value>${env:HWI_WAR_FILE}</value>
</property>
<property>
<name>hive.hwi.listen.host</name>
<value>0.0.0.0</value>
</property>
<property>
<name>hive.hwi.listen.port</name>
<value>9999</value>
</property>

<!-- HiveServer2 no longer needs hive.metastore.local: if hive.metastore.uris is empty the metastore is local, otherwise it is remote. For a remote metastore, just set hive.metastore.uris. -->
<!-- property>
<name>hive.metastore.uris</name>
<value>thrift://m1:9083</value>
<description>Thrift URI for the remote metastore. Used by metastore client to connect to remote metastore.</description>
</property -->
<property>
<name>hive.server2.thrift.bind.host</name>
<value>hadoop1</value>
</property>
<property>
<name>hive.server2.thrift.port</name>
<value>10000</value>
</property>
<property>
<name>hive.server2.thrift.http.port</name>
<value>10001</value>
</property>
<property>
<name>hive.server2.thrift.http.path</name>
<value>cliservice</value>
</property>
<!-- HiveServer2 web UI -->
<property>
<name>hive.server2.webui.host</name>
<value>0.0.0.0</value>
</property>
<property>
<name>hive.server2.webui.port</name>
<value>10002</value>
</property>
<property>
<name>hive.scratch.dir.permission</name>
<value>755</value>
</property>
<!-- In hive.aux.jars.path below, local jar paths must be prefixed with file:// or they will not be found and you will get an org.apache.hadoop.hive.contrib.serde2.RegexSerDe error -->
<property>
<name>hive.aux.jars.path</name>
<value/>
</property>
<property>
<name>hive.server2.enable.doAs</name>
<value>true</value>
</property>
<property>
<name>hive.auto.convert.join</name>
<value>true</value>
</property>
<property>
<name>spark.dynamicAllocation.enabled</name>
<value>true</value>
<description>Dynamic resource allocation</description>
</property>
<!-- When using Hive on Spark, leaving out the following setting can lead to out-of-memory errors -->
<property>
<name>spark.driver.extraJavaOptions</name>
<value>-XX:PermSize=128M -XX:MaxPermSize=512M</value>
</property>
<property>
<name>datanucleus.autoCreateSchema</name>
<value>true</value>
</property>
<property>
<name>datanucleus.autoCreateTables</name>
<value>true</value>
</property>
<property>
<name>datanucleus.autoCreateColumns</name>
<value>true</value>
</property>
</configuration>


Hive's configuration feels like the most involved of the lot. The setup above uses MySQL for metastore management; on CentOS 7 the system ships with MariaDB, which behaves the same as MySQL.
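
Before HiveServer2 can start cleanly, the metastore database and schema have to exist, and the MySQL/MariaDB JDBC driver jar has to be in $HIVE_HOME/lib. A sketch of the usual steps, assuming MariaDB runs locally with the root/root credentials from hive-site.xml:

# Create the metastore database (the JDBC URL's createDatabaseIfNotExist=true would also do this)
mysql -u root -proot -e "CREATE DATABASE IF NOT EXISTS hive DEFAULT CHARACTER SET utf8;"

# Initialize the metastore schema with Hive's schematool (Hive 2.x)
schematool -dbType mysql -initSchema

# Start the metastore service and HiveServer2
hive --service metastore &
hiveserver2 &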

Redis 4.0.1

Install:

$ wget http://download.redis.io/releases/redis-4.0.1.tar.gz
$ tar xzf redis-4.0.1.tar.gz
$ cd redis-4.0.1
$ make


Once make succeeds, run

$ src/redis-server


Test:

$ src/redis-cli
redis> set foo bar
OK
redis> get foo
"bar"


If make fails, see README.md and rebuild with

make MALLOC=libc


The default is

make MALLOC=jemalloc


VirtualBox disk copying

Once one virtual machine is built, copying it yields another one. Sometimes, though, the copy is made by hand (plain file copy) instead of through the VirtualBox UI, and such a machine will not start because its disk UUID is a duplicate. The fix is as follows:

Open cmd, change to the VirtualBox installation directory, and run

VBoxManage.exe internalcommands sethduuid G:\vbox\xxx.vdi


This changes the VDI's UUID.

On success it prints something like: UUID changed to: 428079cd-830d-49b1-bfde-feac051b4d3e

1. Run
VBoxManage internalcommands sethduuid <VDI/VMDK file>
twice (the first run is just a convenient way to generate a UUID; any other UUID generation method would do).

2. Open the .vbox file in a text editor.

3. Replace the UUID found in <Machine uuid="{...}" with the UUID you got when you ran sethduuid the first time.

4. Replace the UUID found in <HardDisk uuid="{...}" and in <Image uuid="{}" (towards the end) with the UUID you got when you ran sethduuid the second time.

Spark

spark-env.sh

export SCALA_HOME=/usr/local/scala
export JAVA_HOME=/usr/local/jdk
export SPARK_MASTER_IP=hadoop1
export SPARK_WORKER_MEMORY=4G
export HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop


The virtual machines here have 8 GB of RAM.

Setting a static IP address

[hadoop@zookeeper network-scripts]$ cat ifcfg-enp0s3
TYPE="Ethernet"
#BOOTPROTO="dhcp"
BOOTPROTO="static"
IPADDR=192.168.56.104
NETMASK=255.255.255.0
DEFROUTE="yes"
PEERDNS="yes"
PEERROUTES="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_PEERDNS="yes"
IPV6_PEERROUTES="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="enp0s3"
UUID="09f62fa6-36bc-4782-95ad-63fda20b194f"
DEVICE="enp0s3"
ONBOOT="yes"
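
After editing the ifcfg file, restart the network service so the static address takes effect (CentOS 7; a sketch):

# Apply the new static IP configuration and check the interface
sudo systemctl restart network
ip addr show enp0s3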


Disabling the firewall

sudo systemctl stop firewalld.service
sudo systemctl disable firewalld.service
sudo systemctl status firewalld.service

Start a service: systemctl start firewalld.service
Stop a service: systemctl stop firewalld.service
Restart a service: systemctl restart firewalld.service
Show a service's status: systemctl status firewalld.service
Enable a service at boot: systemctl enable firewalld.service
Disable a service at boot: systemctl disable firewalld.service
Check whether a service starts at boot: systemctl is-enabled firewalld.service
List enabled services: systemctl list-unit-files | grep enabled


Converting a VirtualBox disk for VMware (ESXi)

vmkload_mod multiextent
vmkfstools -i hadoop3-disk1.vmdk hadoop3-disk2.vmdk -d thin
vmkfstools -U hadoop3-disk1.vmdk
vmkfstools -E hadoop3-disk2.vmdk hadoop3-disk1.vmdk
vmkload_mod -u multiextent


Getting Hadoop out of safe mode

1. Change the safe-mode threshold in the HDFS configuration

Set the safe-mode threshold property in hdfs-site.xml. It defaults to 0.999f; values less than or equal to 0 mean the NameNode will not wait for any particular percentage of blocks before leaving safe mode, and values greater than 1 make safe mode permanent.

<property>
<name>dfs.safemode.threshold.pct</name>
<value>0.999f</value>
<description>
Specifies the percentage of blocks that should satisfy
the minimal replication requirement defined by dfs.replication.min.
Values less than or equal to 0 mean not to wait for any particular
percentage of blocks before exiting safemode.
Values greater than 1 will make safe mode permanent.
</description>
</property>


Because this is a hard-coded change in the configuration file, it is inconvenient for administrators to operate and adjust, so this approach is not recommended.

2. Leave safe mode directly with a shell command (recommended)

While the cluster is in safe mode, run:

hadoop dfsadmin -safemode leave


and it leaves safe mode.
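
The current state can be checked the same way; on Hadoop 2.x, hdfs dfsadmin is the non-deprecated form of the same command (a sketch):

# Check whether the NameNode is still in safe mode
hdfs dfsadmin -safemode get

# Leave safe mode explicitly (same effect as the hadoop dfsadmin form above)
hdfs dfsadmin -safemode leave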

Saving HDFS files to the local filesystem

hadoop fs -get [-ignorecrc] [-crc] <src> <localdst>   # copy files to the local filesystem
hadoop fs -get hdfs://host:port/user/hadoop/file localfile


Running the bundled SparkPi example in each mode

2.1 local mode

./bin/spark-submit --class org.apache.spark.examples.SparkPi --master local lib/spark-examples-1.0.0-hadoop2.2.0.jar


2.2 standalone mode

./bin/spark-submit --class org.apache.spark.examples.SparkPi --master spark://192.168.123.101:7077 lib/spark-examples-1.0.0-hadoop2.2.0.jar


2.3 yarn-cluster mode

./bin/spark-submit --class org.apache.spark.examples.SparkPi --master yarn-cluster lib/spark-examples-1.0.0-hadoop2.2.0.jar


2.4 yarn-client mode

./bin/spark-submit --class org.apache.spark.examples.SparkPi --master yarn-client lib/spark-examples-1.0.0-hadoop2.2.0.jar


2.5 Reference

http://spark.apache.org/docs/latest/submitting-applications.html