Spark LDA Topic Extraction
2015-12-22 20:26
This post summarizes the engineering issues I ran into when using Spark MLlib LDA for topic extraction and lists some of the pitfalls, in the hope that they are useful to other readers. For the underlying LDA theory, a web search will turn up plenty of material. For inferring topics of new documents, see the companion post: Spark LDA 主题预测 (topic prediction).
Development environment: spark-1.5.2 and hadoop-2.6.0 (spark-1.5.2 requires JDK 7+). The corpus contains roughly 700,000 blog posts with over a billion tokens, and the vocabulary holds about 50,000 terms.
Training code (the LDAExample from the Spark examples; see reference 3):
```scala
// scalastyle:off println
package org.apache.spark.examples.mllib

import java.text.BreakIterator

import scala.collection.mutable

import scopt.OptionParser

import org.apache.log4j.{Level, Logger}

import org.apache.spark.{SparkContext, SparkConf}
import org.apache.spark.mllib.clustering.{EMLDAOptimizer, OnlineLDAOptimizer, DistributedLDAModel, LDA}
import org.apache.spark.mllib.linalg.{Vector, Vectors}
import org.apache.spark.rdd.RDD

/**
 * An example Latent Dirichlet Allocation (LDA) app. Run with
 * {{{
 * ./bin/run-example mllib.LDAExample [options] <input>
 * }}}
 * If you use it as a template to create your own app, please use `spark-submit` to submit your app.
 */
object LDAExample {

  private case class Params(
      input: Seq[String] = Seq.empty,
      k: Int = 20,
      maxIterations: Int = 10,
      docConcentration: Double = -1,
      topicConcentration: Double = -1,
      vocabSize: Int = 10000,
      stopwordFile: String = "",
      algorithm: String = "em",
      checkpointDir: Option[String] = None,
      checkpointInterval: Int = 10) extends AbstractParams[Params]

  def main(args: Array[String]) {
    val defaultParams = Params()

    val parser = new OptionParser[Params]("LDAExample") {
      head("LDAExample: an example LDA app for plain text data.")
      opt[Int]("k")
        .text(s"number of topics. default: ${defaultParams.k}")
        .action((x, c) => c.copy(k = x))
      opt[Int]("maxIterations")
        .text(s"number of iterations of learning. default: ${defaultParams.maxIterations}")
        .action((x, c) => c.copy(maxIterations = x))
      opt[Double]("docConcentration")
        .text(s"amount of topic smoothing to use (> 1.0) (-1=auto)." +
          s" default: ${defaultParams.docConcentration}")
        .action((x, c) => c.copy(docConcentration = x))
      opt[Double]("topicConcentration")
        .text(s"amount of term (word) smoothing to use (> 1.0) (-1=auto)." +
          s" default: ${defaultParams.topicConcentration}")
        .action((x, c) => c.copy(topicConcentration = x))
      opt[Int]("vocabSize")
        .text(s"number of distinct word types to use, chosen by frequency. (-1=all)" +
          s" default: ${defaultParams.vocabSize}")
        .action((x, c) => c.copy(vocabSize = x))
      opt[String]("stopwordFile")
        .text(s"filepath for a list of stopwords. Note: This must fit on a single machine." +
          s" default: ${defaultParams.stopwordFile}")
        .action((x, c) => c.copy(stopwordFile = x))
      opt[String]("algorithm")
        .text(s"inference algorithm to use. em and online are supported." +
          s" default: ${defaultParams.algorithm}")
        .action((x, c) => c.copy(algorithm = x))
      opt[String]("checkpointDir")
        .text(s"Directory for checkpointing intermediate results." +
          s" Checkpointing helps with recovery and eliminates temporary shuffle files on disk." +
          s" default: ${defaultParams.checkpointDir}")
        .action((x, c) => c.copy(checkpointDir = Some(x)))
      opt[Int]("checkpointInterval")
        .text(s"Iterations between each checkpoint. Only used if checkpointDir is set." +
          s" default: ${defaultParams.checkpointInterval}")
        .action((x, c) => c.copy(checkpointInterval = x))
      arg[String]("<input>...")
        .text("input paths (directories) to plain text corpora." +
          " Each text file line should hold 1 document.")
        .unbounded()
        .required()
        .action((x, c) => c.copy(input = c.input :+ x))
    }

    parser.parse(args, defaultParams).map { params =>
      run(params)
    }.getOrElse {
      parser.showUsageAsError
      sys.exit(1)
    }
  }

  private def run(params: Params) {
    val conf = new SparkConf().setAppName(s"LDAExample with $params")
    val sc = new SparkContext(conf)

    Logger.getRootLogger.setLevel(Level.WARN)

    // Load documents, and prepare them for LDA.
    val preprocessStart = System.nanoTime()
    val (corpus, vocabArray, actualNumTokens) =
      preprocess(sc, params.input, params.vocabSize, params.stopwordFile)
    corpus.cache()
    val actualCorpusSize = corpus.count()
    val actualVocabSize = vocabArray.size
    val preprocessElapsed = (System.nanoTime() - preprocessStart) / 1e9

    println()
    println(s"Corpus summary:")
    println(s"\t Training set size: $actualCorpusSize documents")
    println(s"\t Vocabulary size: $actualVocabSize terms")
    println(s"\t Training set size: $actualNumTokens tokens")
    println(s"\t Preprocessing time: $preprocessElapsed sec")
    println()

    // Run LDA.
    val lda = new LDA()

    val optimizer = params.algorithm.toLowerCase match {
      case "em" => new EMLDAOptimizer
      // add (1.0 / actualCorpusSize) to MiniBatchFraction be more robust on tiny datasets.
      case "online" => new OnlineLDAOptimizer().setMiniBatchFraction(0.05 + 1.0 / actualCorpusSize)
      case _ => throw new IllegalArgumentException(
        s"Only em, online are supported but got ${params.algorithm}.")
    }

    lda.setOptimizer(optimizer)
      .setK(params.k)
      .setMaxIterations(params.maxIterations)
      .setDocConcentration(params.docConcentration)
      .setTopicConcentration(params.topicConcentration)
      .setCheckpointInterval(params.checkpointInterval)
    if (params.checkpointDir.nonEmpty) {
      sc.setCheckpointDir(params.checkpointDir.get)
    }
    val startTime = System.nanoTime()
    val ldaModel = lda.run(corpus)
    val elapsed = (System.nanoTime() - startTime) / 1e9

    println(s"Finished training LDA model. Summary:")
    println(s"\t Training time: $elapsed sec")

    if (ldaModel.isInstanceOf[DistributedLDAModel]) {
      val distLDAModel = ldaModel.asInstanceOf[DistributedLDAModel]
      val avgLogLikelihood = distLDAModel.logLikelihood / actualCorpusSize.toDouble
      println(s"\t Training data average log likelihood: $avgLogLikelihood")
      println()
    }

    // Print the topics, showing the top-weighted terms for each topic.
    val topicIndices = ldaModel.describeTopics(maxTermsPerTopic = 10)
    val topics = topicIndices.map { case (terms, termWeights) =>
      terms.zip(termWeights).map { case (term, weight) => (vocabArray(term.toInt), weight) }
    }
    println(s"${params.k} topics:")
    topics.zipWithIndex.foreach { case (topic, i) =>
      println(s"TOPIC $i")
      topic.foreach { case (term, weight) =>
        println(s"$term\t$weight")
      }
      println()
    }
    sc.stop()
  }

  /**
   * Load documents, tokenize them, create vocabulary, and prepare documents as term count vectors.
   * @return (corpus, vocabulary as array, total token count in corpus)
   */
  private def preprocess(
      sc: SparkContext,
      paths: Seq[String],
      vocabSize: Int,
      stopwordFile: String): (RDD[(Long, Vector)], Array[String], Long) = {

    // Get dataset of document texts
    // One document per line in each text file. If the input consists of many small files,
    // this can result in a large number of small partitions, which can degrade performance.
    // In this case, consider using coalesce() to create fewer, larger partitions.
    val textRDD: RDD[String] = sc.textFile(paths.mkString(","))

    // Split text into words
    val tokenizer = new SimpleTokenizer(sc, stopwordFile)
    val tokenized: RDD[(Long, IndexedSeq[String])] = textRDD.zipWithIndex().map { case (text, id) =>
      id -> tokenizer.getWords(text)
    }
    tokenized.cache()

    // Counts words: RDD[(word, wordCount)]
    val wordCounts: RDD[(String, Long)] = tokenized
      .flatMap { case (_, tokens) => tokens.map(_ -> 1L) }
      .reduceByKey(_ + _)
    wordCounts.cache()
    val fullVocabSize = wordCounts.count()

    // Select vocab
    // (vocab: Map[word -> id], total tokens after selecting vocab)
    val (vocab: Map[String, Int], selectedTokenCount: Long) = {
      val tmpSortedWC: Array[(String, Long)] = if (vocabSize == -1 || fullVocabSize <= vocabSize) {
        // Use all terms
        wordCounts.collect().sortBy(-_._2)
      } else {
        // Sort terms to select vocab
        wordCounts.sortBy(_._2, ascending = false).take(vocabSize)
      }
      (tmpSortedWC.map(_._1).zipWithIndex.toMap, tmpSortedWC.map(_._2).sum)
    }

    val documents = tokenized.map { case (id, tokens) =>
      // Filter tokens by vocabulary, and create word count vector representation of document.
      val wc = new mutable.HashMap[Int, Int]()
      tokens.foreach { term =>
        if (vocab.contains(term)) {
          val termIndex = vocab(term)
          wc(termIndex) = wc.getOrElse(termIndex, 0) + 1
        }
      }
      val indices = wc.keys.toArray.sorted
      val values = indices.map(i => wc(i).toDouble)

      val sb = Vectors.sparse(vocab.size, indices, values)
      (id, sb)
    }

    val vocabArray = new Array[String](vocab.size)
    vocab.foreach { case (term, i) => vocabArray(i) = term }

    (documents, vocabArray, selectedTokenCount)
  }
}

/**
 * Simple Tokenizer.
 *
 * TODO: Formalize the interface, and make this a public class in mllib.feature
 */
private class SimpleTokenizer(sc: SparkContext, stopwordFile: String) extends Serializable {

  private val stopwords: Set[String] = if (stopwordFile.isEmpty) {
    Set.empty[String]
  } else {
    val stopwordText = sc.textFile(stopwordFile).collect()
    stopwordText.flatMap(_.stripMargin.split("\\s+")).toSet
  }

  // Matches sequences of Unicode letters
  private val allWordRegex = "^(\\p{L}*)$".r

  // Ignore words shorter than this length.
  private val minWordLength = 3

  def getWords(text: String): IndexedSeq[String] = {

    val words = new mutable.ArrayBuffer[String]()

    // Use Java BreakIterator to tokenize text into words.
    val wb = BreakIterator.getWordInstance
    wb.setText(text)

    // current,end index start,end of each word
    var current = wb.first()
    var end = wb.next()
    while (end != BreakIterator.DONE) {
      // Convert to lowercase
      val word: String = text.substring(current, end).toLowerCase
      // Remove short words and strings that aren't only letters
      word match {
        case allWordRegex(w) if w.length >= minWordLength && !stopwords.contains(w) =>
          words += w
        case _ =>
      }

      current = end
      try {
        end = wb.next()
      } catch {
        case e: Exception =>
          // Ignore remaining text in line.
          // This is a known bug in BreakIterator (for some Java versions),
          // which fails when it sees certain characters.
          end = BreakIterator.DONE
      }
    }
    words
  }
}
// scalastyle:on println
```
Command to run:
```bash
spark-submit \
  --class "LDAExample" \
  --master local[*] \
  --driver-memory 32g \
  target/pack/lib/project.jar \
  "file:/tmp/documents" \
  --stopwordFile "file:/tmp/stopwords" \
  --k 50 \
  --algorithm online \
  --maxIterations 50 \
  --vocabSize 50000
```
Pitfalls
sbt pack
The code is compiled with sbt and then submitted to Spark for execution, so all of the program's dependencies must be packaged; `sbt pack` produces the target/pack/lib/project.jar used in the command above.
--driver-memory
Because the master is local[*], this value must be sized according to the training data, otherwise the driver runs out of memory. If running on YARN or Mesos, set executor-memory instead.
--stopwordFile
You can train once to produce a vocabulary, then move the unwanted terms into the stopword file. The vocabulary has a large influence on the final topics, so remove as many noise terms as possible.
--k
The number of topics. The larger it is, the more memory is required and the longer the run takes.
--algorithm
Currently em and online are supported. em trains a DistributedLDAModel, which keeps rich information about the training documents but currently cannot score new documents directly (call toLocal to convert it to a LocalLDAModel). online trains a LocalLDAModel, which can be used to infer topics for new documents. online was added later and performs better; Gibbs sampling may be added in a future release. A conversion sketch is shown below.
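For example, a minimal sketch of converting an EM-trained model so it can score unseen documents. `ldaModel` is the result of `lda.run(corpus)` in the example above; `newDocs` is a hypothetical RDD of bag-of-words vectors prepared with the same vocabulary as the training corpus:

```scala
import org.apache.spark.mllib.clustering.{DistributedLDAModel, LocalLDAModel}
import org.apache.spark.mllib.linalg.Vector
import org.apache.spark.rdd.RDD

// EM returns a DistributedLDAModel; toLocal keeps only the topic-term matrix,
// which is what inference on new documents needs.
val localModel: LocalLDAModel = ldaModel match {
  case dist: DistributedLDAModel => dist.toLocal
  case local: LocalLDAModel      => local // the online optimizer already returns this type
}

// newDocs: RDD[(Long, Vector)] is hypothetical and must use the same term indices as training.
// topicDistributions on LocalLDAModel is available in Spark 1.5+.
val newDocTopics: RDD[(Long, Vector)] = localModel.topicDistributions(newDocs)
```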
--maxIterations
The larger it is, the more memory and time the run takes.
--vocabSize
The maximum number of terms in the vocabulary.
spark.driver.maxResultSize
Set this in the program. It caps the total size of results collected back to the driver; with a large training set the default is not enough.
new SparkConf().set("spark.driver.maxResultSize", "5g")
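In the example's run() method this could look like the following sketch ("5g" is illustrative, not a recommendation):

```scala
// Build the SparkContext with a raised cap on results collected to the driver.
val conf = new SparkConf()
  .setAppName("LDAExample")
  .set("spark.driver.maxResultSize", "5g")
val sc = new SparkContext(conf)
```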
--docConcentration and --topicConcentration
The former is the prior on each document's topic distribution, the latter the prior on each topic's term distribution. Both default to -1, which lets Spark choose the value automatically; see reference 4. A sketch of setting them explicitly follows the scaladoc excerpts below.
docConcentration defaults (from the Spark scaladoc):
* Optimizer-specific parameter settings:
* - EM
* - Value should be > 1.0
* - default = (50 / k) + 1, where 50/k is common in LDA libraries and +1 follows
* Asuncion et al. (2009), who recommend a +1 adjustment for EM.
* - Online
* - Value should be >= 0
* - default = (1.0 / k), following the implementation from
* [[https://github.com/Blei-Lab/onlineldavb]].
topicConcentration defaults (from the Spark scaladoc):
* Optimizer-specific parameter settings:
* - EM
* - Value should be > 1.0
* - default = 0.1 + 1, where 0.1 gives a small amount of smoothing and +1 follows
* Asuncion et al. (2009), who recommend a +1 adjustment for EM.
* - Online
* - Value should be >= 0
* - default = (1.0 / k), following the implementation from
* [[https://github.com/Blei-Lab/onlineldavb]].
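If you prefer to set the priors explicitly rather than passing -1, a minimal sketch (the optimizer choice and values here are illustrative only, not recommendations):

```scala
import org.apache.spark.mllib.clustering.LDA

val k = 50
val lda = new LDA()
  .setK(k)
  .setOptimizer("online")
  .setDocConcentration(1.0 / k)    // alpha: prior on per-document topic distributions (online: >= 0)
  .setTopicConcentration(1.0 / k)  // beta/eta: prior on per-topic term distributions (online: >= 0)
```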
Document preprocessing
Note that each line of the training set is one source document. SimpleTokenizer splits each line into tokens, and this is where the stopwordFile filtering is applied. In the preprocessing function preprocess, wordCounts holds every term in the training set together with its frequency (think of it as a map) sorted in descending order, and the top vocabSize terms are taken as the vocabulary. Dump the vocabulary (highest-frequency terms first), move the noisy or unimportant terms into the stopword file, and retrain; after a few rounds of this the vocabulary quality becomes quite good (see the sketch below). References 1 and 2 trained on roughly five million Wikipedia documents and ended up with a vocabulary of only about ten thousand terms. The higher the vocabulary quality, the higher the topic quality.
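A minimal sketch of that curation loop, assuming the example's preprocess() is used as-is; the output path is hypothetical:

```scala
import java.io.PrintWriter

// vocabArray is ordered by descending corpus frequency (index 0 = most frequent term).
val (corpus, vocabArray, actualNumTokens) =
  preprocess(sc, params.input, params.vocabSize, params.stopwordFile)

// Dump one term per line, inspect the file, move noise words into the stopword file, retrain.
val writer = new PrintWriter("/tmp/vocab.txt")
try vocabArray.foreach(writer.println) finally writer.close()
```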
Using the model
After training finishes, call save on the model to persist it for later use. From the trained model you can inspect each topic's distribution over the vocabulary, as well as the topic distribution of each training document.
LocalLDAModel contains topicsMatrix, a vocabSize x k matrix giving the k topics' distributions over the vocabulary. The matrix is indexed by term id rather than by the term itself, so to use it later you must save the vocabulary yourself and keep its indices consistent with the matrix. During preprocessing, each document is turned into a "term index -> term frequency" vector.
describeTopics(maxTermsPerTopic: Int) returns all topics, with each topic's terms already sorted by weight in descending order and truncated to the requested number of terms per topic.
For details, consult the Spark API documentation for LocalLDAModel and DistributedLDAModel.
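A minimal sketch of persisting and reading back the results. It assumes a `localModel: LocalLDAModel` (from the online optimizer or via toLocal) and the `vocabArray` from preprocess(); the paths are hypothetical, and the vocabulary is saved separately because the model only stores term indices:

```scala
import org.apache.spark.mllib.clustering.LocalLDAModel

// Persist the model and, alongside it, the vocabulary so the term indices stay interpretable.
localModel.save(sc, "file:/tmp/lda-model")
sc.parallelize(vocabArray.toSeq, numSlices = 1).saveAsTextFile("file:/tmp/lda-vocab")

// Later, in another job:
val restored = LocalLDAModel.load(sc, "file:/tmp/lda-model")

// Top 10 terms per topic; weights come back sorted in descending order.
restored.describeTopics(maxTermsPerTopic = 10).zipWithIndex.foreach {
  case ((termIndices, termWeights), topicId) =>
    val top = termIndices.zip(termWeights)
      .map { case (termId, weight) => f"${vocabArray(termId)} ($weight%.4f)" }
    println(s"TOPIC $topicId: " + top.mkString(", "))
}
```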
References:
1. https://databricks.com/blog/2015/03/25/topic-modeling-with-lda-mllib-meets-graphx.html
2. https://databricks.com/blog/2015/09/22/large-scale-topic-modeling-improvements-to-lda-on-spark.html
3. https://github.com/apache/spark/blob/master/examples/src/main/scala/org/apache/spark/examples/mllib/LDAExample.scala
4. /article/1365677.html
5. http://spark.apache.org/docs/latest/quick-start.html