
Hadoop 2.2.0 Deployment and Installation (Notes, Single-Node Install)

2014-07-05 15:17

Passwordless SSH Setup and Configuration

Configuration steps:

◎ Create the .ssh directory under root's home directory (you must be logged in as root):

cd /root && mkdir .ssh

chmod 700 .ssh && cd .ssh

◎ Generate an RSA key pair with an empty passphrase:

ssh-keygen -t rsa -P ""

◎ At the key-file prompt, press Enter to accept the default name id_rsa; then append the public key to authorized_keys:

cat id_rsa.pub >> authorized_keys

chmod 644 authorized_keys # important

◎ Edit the sshd configuration file /etc/ssh/sshd_config and remove the comment marker in front of the line #AuthorizedKeysFile .ssh/authorized_keys.
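If you prefer to do this non-interactively, a one-line sed achieves the same uncommenting (a sketch; consider backing up the file first):

sed -i 's/^#AuthorizedKeysFile/AuthorizedKeysFile/' /etc/ssh/sshd_config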

◎ Restart the sshd service:

service sshd restart

◎ Test the SSH connection. On the first connection you will be asked to confirm the host; after you confirm, the host key is added to known_hosts. With the key pair in place, no password should be requested; if you are still prompted for one, re-check the permissions on .ssh (700) and authorized_keys (644):

ssh localhost # should log in without asking for a password

Hadoop 2.2.0 Deployment and Installation

Steps:

◎ Download the release tarball (hadoop-2.2.0.tar.gz), for example as shown below.
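One way to fetch it, assuming you use the Apache release archive (any mirror carrying 2.2.0 works just as well):

wget https://archive.apache.org/dist/hadoop/common/hadoop-2.2.0/hadoop-2.2.0.tar.gz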

◎ Unpack Hadoop and set up the configuration.

# Create a hadoop directory under root's home directory
mkdir hadoop
cd hadoop

# Place the hadoop-2.2.0 tarball in this directory, then unpack it
tar -zxvf hadoop-2.2.0.tar.gz

# Enter the hadoop-2.2.0 directory
cd hadoop-2.2.0

# Enter the configuration directory
cd etc/hadoop

# Edit core-site.xml and add the hadoop.tmp.dir and fs.default.name properties

vi core-site.xml
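A minimal single-node sketch of those two properties; the hdfs://localhost:9000 NameNode address and the /root/hadoop/tmp directory are assumptions, so adapt them to your environment:

<configuration>
    <!-- fs.default.name: URI of the default filesystem, i.e. the NameNode (address is an assumption) -->
    <property>
        <name>fs.default.name</name>
        <value>hdfs://localhost:9000</value>
    </property>
    <!-- hadoop.tmp.dir: base directory for Hadoop's temporary and data files (path is an assumption) -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/root/hadoop/tmp</value>
    </property>
</configuration>

Next, edit hadoop-env.sh in the same directory and set JAVA_HOME. The full file after editing follows; JAVA_HOME is normally the only line that needs changing: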

# (standard Apache License 2.0 header omitted)

# Set Hadoop-specific environment variables here.

# The only required environment variable is JAVA_HOME.  All others are
# optional.  When running a distributed configuration it is best to
# set JAVA_HOME in this file, so that it is correctly defined on
# remote nodes.

# The java implementation to use.
export JAVA_HOME=/usr/local/jdk1.7

# The jsvc implementation to use. Jsvc is required to run secure datanodes.
#export JSVC_HOME=${JSVC_HOME}

export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-"/etc/hadoop"}

# Extra Java CLASSPATH elements.  Automatically insert capacity-scheduler.
for f in $HADOOP_HOME/contrib/capacity-scheduler/*.jar; do
  if [ "$HADOOP_CLASSPATH" ]; then
    export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$f
  else
    export HADOOP_CLASSPATH=$f
  fi
done

# The maximum amount of heap to use, in MB. Default is 1000.
#export HADOOP_HEAPSIZE=
#export HADOOP_NAMENODE_INIT_HEAPSIZE=""

# Extra Java runtime options.  Empty by default.
export HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true"

# Command specific options appended to HADOOP_OPTS when specified
export HADOOP_NAMENODE_OPTS="-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS} -Dhdfs.audit.logger=${HDFS_AUDIT_LOGGER:-INFO,NullAppender} $HADOOP_NAMENODE_OPTS"
export HADOOP_DATANODE_OPTS="-Dhadoop.security.logger=ERROR,RFAS $HADOOP_DATANODE_OPTS"
export HADOOP_SECONDARYNAMENODE_OPTS="-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS} -Dhdfs.audit.logger=${HDFS_AUDIT_LOGGER:-INFO,NullAppender} $HADOOP_SECONDARYNAMENODE_OPTS"

# The following applies to multiple commands (fs, dfs, fsck, distcp etc)
export HADOOP_CLIENT_OPTS="-Xmx512m $HADOOP_CLIENT_OPTS"
#HADOOP_JAVA_PLATFORM_OPTS="-XX:-UsePerfData $HADOOP_JAVA_PLATFORM_OPTS"

# On secure datanodes, user to run the datanode as after dropping privileges
export HADOOP_SECURE_DN_USER=${HADOOP_SECURE_DN_USER}

# Where log files are stored.  $HADOOP_HOME/logs by default.
#export HADOOP_LOG_DIR=${HADOOP_LOG_DIR}/$USER

# Where log files are stored in the secure data environment.
export HADOOP_SECURE_DN_LOG_DIR=${HADOOP_LOG_DIR}/${HADOOP_HDFS_USER}

# The directory where pid files are stored. /tmp by default.
# NOTE: this should be set to a directory that can only be written to by
#       the user that will run the hadoop daemons.  Otherwise there is the
#       potential for a symlink attack.
export HADOOP_PID_DIR=${HADOOP_PID_DIR}
export HADOOP_SECURE_DN_PID_DIR=${HADOOP_PID_DIR}

# A string representing this instance of hadoop. $USER by default.
export HADOOP_IDENT_STRING=$USER
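Before moving on, it is worth confirming that the JAVA_HOME set above really points at a JDK (the path below is the one from the listing; substitute your own):

/usr/local/jdk1.7/bin/java -version # should print a 1.7.x version string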



Set the HADOOP_HOME environment variable. Edit /etc/profile and add:

export HADOOP_HOME=/root/hadoop/hadoop-2.2.0

Then run source /etc/profile to make the change take effect.

Check that the variable took effect:

echo $HADOOP_HOME

/root/hadoop/hadoop-2.2.0

Go into the bin directory under the Hadoop install directory and format HDFS. (In 2.2.0 the command below still works but is deprecated; ./hdfs namenode -format is the current form.)

./hadoop namenode -format

Start Hadoop: go into the sbin directory under the install directory. (start-all.sh is deprecated in 2.x in favor of start-dfs.sh plus start-yarn.sh, but it still works for a quick single-node setup.)

./start-all.sh
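To confirm the daemons are actually running, jps (shipped with the JDK) is a quick check. On a healthy single-node 2.2.0 install started with start-all.sh you would typically see:

jps
# Expected processes (PIDs will differ):
#   NameNode, DataNode, SecondaryNameNode, ResourceManager, NodeManager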

Verify the installation by browsing to http://localhost:50070/ (the NameNode web UI).
