Installing and Testing Mahout on a Single-Node Pseudo-Distributed Hadoop Setup
2014-02-27 14:39
Install the JDK
See my earlier post on installing JDK 1.7: http://blog.csdn.net/stanely_hwang/article/details/18883599
Install Hadoop in single-node pseudo-distributed mode
See my earlier post on single-node pseudo-distributed Hadoop installation: http://blog.csdn.net/stanely_hwang/article/details/18884181
Installing and configuring Mahout
1: Download the binary distribution and extract it:
Mahout download page: http://www.apache.org/dyn/closer.cgi/mahout/
After downloading, simply extract the archive. I downloaded Mahout to /opt/hadoop; change into that directory and extract:
$ cd /opt/hadoop
$ tar -zxvf mahout-distribution-0.9.tar.gz
2: Configure environment variables:
Edit /etc/profile with vim and append the $HADOOP_HOME, $HADOOP_CONF_DIR, and $MAHOUT_HOME environment variables at the end of the file. The full configuration looks like this:
JAVA_HOME=/opt/java/jdk
JRE_HOME=/opt/java/jdk
PATH=/sbin:/bin:/usr/sbin:/usr/bin:/root/bin
export JAVA_HOME
export JRE_HOME
export HADOOP_HOME=/home/andy/hadoop-2.2.0
export HADOOP_CONF_DIR=/home/andy/hadoop-2.2.0/conf
export MAHOUT_HOME=/opt/hadoop/mahout-distribution-0.9
export PATH=$HADOOP_HOME/bin:$MAHOUT_HOME/bin:$PATH
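After running `source /etc/profile`, it is worth confirming that each variable actually resolved before moving on. The helper below is a small convenience sketch (not part of the original post) that prints the variables the later steps rely on, so a typo in the profile is caught early:

```shell
# Print each environment variable the Mahout/Hadoop scripts rely on.
# Unset variables are shown as <unset> so they stand out.
show_env() {
  for v in JAVA_HOME HADOOP_HOME HADOOP_CONF_DIR MAHOUT_HOME; do
    eval "val=\$$v"
    echo "$v=${val:-<unset>}"
  done
}

# Typical usage after editing /etc/profile:
#   source /etc/profile
#   show_env
```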
3: Start Hadoop:
From the sbin directory of the Hadoop installation (~/hadoop-2.2.0/sbin), run:
$ ./hadoop-daemon.sh start namenode
$ ./hadoop-daemon.sh start datanode
$ ./yarn-daemon.sh start resourcemanager
$ ./yarn-daemon.sh start nodemanager
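Once the four daemons are started, `jps` (shipped with the JDK) should list NameNode, DataNode, ResourceManager, and NodeManager. The helper below is a hedged sketch, not from the original post: it filters a jps-style listing and reports any of the four expected daemons that are missing.

```shell
# Reads "pid Name" lines (the jps output format) on stdin and checks that
# all four daemons started above are present.
expect_daemons() {
  local listing missing=""
  listing=$(cat)
  for d in NameNode DataNode ResourceManager NodeManager; do
    echo "$listing" | grep -q "$d" || missing="$missing $d"
  done
  if [ -z "$missing" ]; then
    echo "all daemons running"
  else
    echo "missing:$missing"
  fi
}

# Typical usage on the cluster node:
#   jps | expect_daemons
```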
4: Run mahout --help to check that Mahout is installed correctly (it should list the available algorithms).
Change into $MAHOUT_HOME/bin:
$ cd $MAHOUT_HOME/bin
$ ./mahout --help
The output should look like the following:
MAHOUT_LOCAL is not set; adding HADOOP_CONF_DIR to classpath.
Running on hadoop, using /home/andy/hadoop-2.2.0/bin/hadoop and HADOOP_CONF_DIR=/home/andy/hadoop-2.2.0/conf
MAHOUT-JOB: /opt/hadoop/mahout-distribution-0.9/mahout-examples-0.9-job.jar
Unknown program '--help' chosen.
Valid program names are:
  arff.vector: : Generate Vectors from an ARFF file or directory
  baumwelch: : Baum-Welch algorithm for unsupervised HMM training
  canopy: : Canopy clustering
  cat: : Print a file or resource as the logistic regression models would see it
  cleansvd: : Cleanup and verification of SVD output
  clusterdump: : Dump cluster output to text
  clusterpp: : Groups Clustering Output In Clusters
  cmdump: : Dump confusion matrix in HTML or text formats
  concatmatrices: : Concatenates 2 matrices of same cardinality into a single matrix
  cvb: : LDA via Collapsed Variation Bayes (0th deriv. approx)
  cvb0_local: : LDA via Collapsed Variation Bayes, in memory locally.
  evaluateFactorization: : compute RMSE and MAE of a rating matrix factorization against probes
  fkmeans: : Fuzzy K-means clustering
  hmmpredict: : Generate random sequence of observations by given HMM
  itemsimilarity: : Compute the item-item-similarities for item-based collaborative filtering
  kmeans: : K-means clustering
  lucene.vector: : Generate Vectors from a Lucene index
  lucene2seq: : Generate Text SequenceFiles from a Lucene index
  matrixdump: : Dump matrix in CSV format
  matrixmult: : Take the product of two matrices
  parallelALS: : ALS-WR factorization of a rating matrix
  qualcluster: : Runs clustering experiments and summarizes results in a CSV
  recommendfactorized: : Compute recommendations using the factorization of a rating matrix
  recommenditembased: : Compute recommendations using item-based collaborative filtering
  regexconverter: : Convert text files on a per line basis based on regular expressions
  resplit: : Splits a set of SequenceFiles into a number of equal splits
  rowid: : Map SequenceFile<Text,VectorWritable> to {SequenceFile<IntWritable,VectorWritable>, SequenceFile<IntWritable,Text>}
  rowsimilarity: : Compute the pairwise similarities of the rows of a matrix
  runAdaptiveLogistic: : Score new production data using a probably trained and validated AdaptivelogisticRegression model
  runlogistic: : Run a logistic regression model against CSV data
  seq2encoded: : Encoded Sparse Vector generation from Text sequence files
  seq2sparse: : Sparse Vector generation from Text sequence files
  seqdirectory: : Generate sequence files (of Text) from a directory
  seqdumper: : Generic Sequence File dumper
  seqmailarchives: : Creates SequenceFile from a directory containing gzipped mail archives
  seqwiki: : Wikipedia xml dump to sequence file
  spectralkmeans: : Spectral k-means clustering
  split: : Split Input data into test and train sets
  splitDataset: : split a rating dataset into training and probe parts
  ssvd: : Stochastic SVD
  streamingkmeans: : Streaming k-means clustering
  svd: : Lanczos Singular Value Decomposition
  testnb: : Test the Vector-based Bayes classifier
  trainAdaptiveLogistic: : Train an AdaptivelogisticRegression model
  trainlogistic: : Train a logistic regression using stochastic gradient descent
  trainnb: : Train the Vector-based Bayes classifier
  transpose: : Take the transpose of a matrix
  validateAdaptiveLogistic: : Validate an AdaptivelogisticRegression model against hold-out data set
  vecdist: : Compute the distances between a set of Vectors (or Cluster or Canopy, they must fit in memory) and a list of Vectors
  vectordump: : Dump vectors from a sequence file to text
  viterbi: : Viterbi decoding of hidden states from given output states sequence
[andy@localhost bin]$
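Since the driver listing above is long, a small filter (a convenience sketch of my own, not part of Mahout) can confirm a given program name is available before running it:

```shell
# Reads a Mahout driver listing (lines like "  kmeans: : K-means clustering")
# on stdin and reports whether the named program appears in it.
has_program() {
  if grep -q "^ *$1: " ; then
    echo "$1 available"
  else
    echo "$1 missing"
  fi
}

# Typical usage:
#   $MAHOUT_HOME/bin/mahout 2>&1 | has_program kmeans
```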
5: Preparing to use Mahout:
Download the test data from:
http://archive.ics.uci.edu/ml/databases/synthetic_control/synthetic_control.data
After downloading, place the file under $MAHOUT_HOME.
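Before pushing the file into HDFS, a quick local sanity check helps rule out a truncated download. This is my own hedged sketch, relying on the assumption that the UCI synthetic control dataset has 600 rows, one 60-value time series per line:

```shell
# Prints the number of lines in the given file, with no surrounding whitespace.
count_rows() {
  wc -l < "$1" | tr -d ' '
}

# Typical usage (expect 600 for an intact download):
#   count_rows $MAHOUT_HOME/synthetic_control.data
```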
Create a test directory named testdata in HDFS and import the data into it:
$ cd $HADOOP_HOME/bin/
$ hadoop fs -mkdir testdata
$ hadoop fs -put $MAHOUT_HOME/synthetic_control.data testdata
$ hadoop jar $MAHOUT_HOME/mahout-examples-0.9-job.jar org.apache.mahout.clustering.syntheticcontrol.kmeans.Job
$ hadoop fs -lsr output
$ hadoop fs -get output $MAHOUT_HOME/result
$ cd $MAHOUT_HOME/result
$ ls
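To look inside the clusters rather than just list files, Mahout's clusterdump driver (shown in the driver table earlier) can render them as text. The exact clusters-N-final directory name depends on how many iterations k-means ran, so the helper below, a sketch of my own, picks it out of an `fs -ls`-style listing:

```shell
# Reads directory names on stdin and prints the last clusters-*-final entry,
# which is the converged cluster set produced by the kmeans example job.
latest_final() {
  grep -o 'clusters-[0-9]*-final' | tail -n 1
}

# Typical usage on the cluster (not executed here):
#   FINAL=$(hadoop fs -ls output | latest_final)
#   $MAHOUT_HOME/bin/mahout clusterdump \
#     --input output/$FINAL \
#     --pointsDir output/clusteredPoints \
#     --output $MAHOUT_HOME/result/clusteranalyze.txt
```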
If the result files are listed, Mahout is installed and working correctly!