
Spark: K-Means|| and K-Means++ Algorithm Principles -- 52

2015-07-06 20:50
How K-Means|| works (it is one of the two ways the K-Means implementation in Spark MLlib initializes the K cluster centers; the other is choosing K centers at random):

1. For each run of K-Means, randomly choose one initial center, then sample roughly 2k further points, each point being chosen with probability proportional to its squared distance to the nearest center already selected;

2. Run K-Means++ once on these roughly 2k sampled points to obtain k initial cluster centers, then run several iterations of Lloyd's algorithm on the same ~2k points starting from those k centers;

3. The resulting centers are used as the final initial cluster centers; a usage sketch against the MLlib API follows below.
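In MLlib this choice is exposed through the initializationMode parameter. A minimal usage sketch against the Spark 1.4 MLlib API (the input path and parameter values here are placeholders):

import org.apache.spark.mllib.clustering.KMeans
import org.apache.spark.mllib.linalg.Vectors

// Assumes an existing SparkContext `sc` and a whitespace-separated numeric text file.
val data = sc.textFile("data/mllib/kmeans_data.txt")
  .map(line => Vectors.dense(line.split(' ').map(_.toDouble)))
  .cache()

// "k-means||" is the default initialization mode; "random" is the alternative.
val model = KMeans.train(data, k = 3, maxIterations = 20,
  runs = 1, initializationMode = "k-means||")

println(s"Within-set sum of squared errors = ${model.computeCost(data)}")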

////////////////////////////////////////////////////////////////////////////////////////////////////////////////////

How K-Means++ works (the initial cluster centers should be as far apart from one another as possible):

1. Randomly choose one point from the input data set as the first cluster center;

2. For each point x in the data set, compute D(x), its distance to the nearest cluster center chosen so far;

3. Choose a new cluster center that favors large D(x) (strictly, K-Means++ samples each point with probability proportional to D(x)²; taking the single point with the largest D(x) is a common simplification);

4. Repeat steps 2 and 3 until K cluster centers have been chosen;

5. Run standard K-Means with these K initial cluster centers; a minimal seeding sketch follows after this list.
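To make the sampling in steps 2 and 3 concrete, here is a minimal local Scala sketch of K-Means++ seeding (my own illustration, not MLlib code); points are plain Array[Double] and distances are squared Euclidean:

import scala.util.Random
import scala.collection.mutable.ArrayBuffer

def sqDist(a: Array[Double], b: Array[Double]): Double =
  a.zip(b).map { case (x, y) => (x - y) * (x - y) }.sum

def kMeansPlusPlusSeed(points: Array[Array[Double]], k: Int,
                       rng: Random = new Random): Array[Array[Double]] = {
  // Step 1: pick the first center uniformly at random
  val centers = ArrayBuffer(points(rng.nextInt(points.length)))
  while (centers.size < k) {
    // Step 2: D(x)^2 = squared distance to the nearest chosen center
    val d2 = points.map(p => centers.map(c => sqDist(p, c)).min)
    // Step 3: sample the next center with probability proportional to D(x)^2
    var r = rng.nextDouble() * d2.sum
    var i = 0
    while (i < d2.length - 1 && r > d2(i)) { r -= d2(i); i += 1 }
    centers += points(i)
  }
  centers.toArray
}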

/////////////////////////////////////////////////////////////////////////////////////////////////////////////////////

I recently went through the official Spark 1.4.0 documentation and organized a few notes that will probably come in handy later.

Commonly used APIs:

Transformations:

map(func), filter(func), flatMap(func), mapPartitions(func), mapPartitionsWithIndex(func), sample(withReplacement, fraction, seed),

union(otherDataset), intersection(otherDataset), distinct([numTasks]), groupByKey([numTasks]), reduceByKey(func, [numTasks]),

aggregateByKey(zeroValue)(seqOp, combOp, [numTasks]), sortByKey([ascending], [numTasks]), join(otherDataset, [numTasks]),

cogroup(otherDataset, [numTasks]), cartesian(otherDataset), pipe(command, [envVars]), coalesce(numPartitions),

repartition(numPartitions), repartitionAndSortWithinPartitions(partitioner)
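A small sketch chaining a few of these transformations (assumes an existing SparkContext sc; the input values are made up):

val words = sc.parallelize(Seq("spark streaming", "spark sql"))
val counts = words
  .flatMap(_.split(" "))   // "spark streaming" -> "spark", "streaming"
  .map(w => (w, 1))        // pair each word with a count of 1
  .reduceByKey(_ + _)      // sum the counts per word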

Actions:

reduce(func), collect(), count(), first(), take(n), takeSample(withReplacement, num, [seed]), takeOrdered(n, [ordering]),

saveAsTextFile(path), saveAsSequenceFile(path), saveAsObjectFile(path), countByKey(), foreach(func)
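Transformations are lazy; nothing executes until an action runs. Continuing the sketch above (the output path is a placeholder):

counts.count()                  // triggers the job; returns the number of distinct words
counts.take(2)                  // first two (word, count) pairs, returned to the driver
counts.saveAsTextFile("out/wc") // writes one text file per partition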

///////////////////////////////

Accumulators:

An accumulator is created from an initial value v by calling SparkContext.accumulator(v). Tasks running on the cluster can then add to it with the add method or the += operator (in Scala and Python). However, tasks cannot read the accumulator's value; only the driver program can read it, through its value method.

scala> val accum = sc.accumulator(0, "My Accumulator")
accum: spark.Accumulator[Int] = 0
scala> sc.parallelize(Array(1, 2, 3, 4)).foreach(x => accum += x)
...
10/09/29 18:41:08 INFO SparkContext: Tasks finished in 0.317106 s
scala> accum.value
res2: Int = 10
To accumulate values of a custom type, extend AccumulatorParam (this example is from the official programming guide; Vector here is Spark's old spark.util.Vector class):

object VectorAccumulatorParam extends AccumulatorParam[Vector] {
  // The "zero" element for the type: a vector of zeros of the same size
  def zero(initialValue: Vector): Vector = {
    Vector.zeros(initialValue.size)
  }
  // How two partial results are merged together
  def addInPlace(v1: Vector, v2: Vector): Vector = {
    v1 += v2
  }
}

// Then, create an Accumulator of this type:
val vecAccum = sc.accumulator(new Vector(...))(VectorAccumulatorParam)

// Accumulators are only updated once an action runs; transformations are lazy:
val accum = sc.accumulator(0)
data.map { x => accum += x; f(x) }
// Here, accum is still 0 because no actions have caused the `map` to be computed.
/////////////////////////////////////////////////////////////////////////////

Commonly used Spark Streaming APIs:

DStream transformations:

map(func), flatMap(func), filter(func), repartition(numPartitions), union(otherStream), count(), reduce(func),

countByValue(), reduceByKey(func, [numTasks]), join(otherStream, [numTasks]), cogroup(otherStream, [numTasks]),

transform(func), updateStateByKey(func) (a sketch of updateStateByKey follows below)
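A minimal updateStateByKey sketch that keeps a running count per key across batches (pairs is assumed to be an existing DStream[(String, Int)]; stateful operations also require a checkpoint directory to be set via ssc.checkpoint(...)):

val runningCounts = pairs.updateStateByKey[Int] {
  (newValues: Seq[Int], state: Option[Int]) =>
    // merge this batch's values into the running total for the key
    Some(state.getOrElse(0) + newValues.sum)
}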

Window Operations:

window(windowLength, slideInterval), countByWindow(windowLength, slideInterval), reduceByWindow(func, windowLength, slideInterval),

reduceByKeyAndWindow(func, windowLength, slideInterval, [numTasks]), countByValueAndWindow(windowLength, slideInterval, [numTasks])
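A short window-operation sketch, reusing the pairs DStream assumed above: word counts over the last 30 seconds, recomputed every 10 seconds.

import org.apache.spark.streaming.Seconds

val windowedCounts = pairs.reduceByKeyAndWindow(
  (a: Int, b: Int) => a + b,  // reduce function applied within the window
  Seconds(30),                // window length
  Seconds(10))                // slide interval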

///////////////////////////////////////////////////////////////////////////////
DataFrames and Spark SQL:

A DataFrame is a distributed collection of data organized into named columns, equivalent to a table in a relational database. Its data source support allows applications to easily combine data from different sources.

DataFrame operations are evaluated lazily.

Creating a DataFrame from the contents of a JSON file:

val sc: SparkContext // An existing SparkContext.
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
val df = sqlContext.read.json("examples/src/main/resources/people.json")
// Display the content of the DataFrame to stdout
df.show()
// age  name
// null Michael
// 30   Andy
// 19   Justin

// Print the schema in a tree format
df.printSchema()
// root
// |-- age: long (nullable = true)
// |-- name: string (nullable = true)

// Select only the "name" column
df.select("name").show()
// name
// Michael
// Andy
// Justin

// Select everybody, but increment the age by 1
df.select(df("name"), df("age") + 1).show()
// name    (age + 1)
// Michael null
// Andy    31
// Justin  20

// Select people older than 21
df.filter(df("age") > 21).show()
// age name
// 30  Andy

// Count people by age
df.groupBy("age").count().show()
// age  count
// null 1
// 19   1
// 30   1
A JSON dataset can be loaded as a DataFrame (Spark SQL infers the schema automatically).

New between spark-1.3.0 and spark-1.4.0: SQLContext.read (for reading data) and DataFrame.write (for writing data);

Deprecated APIs: SQLContext.parquetFile, SQLContext.jsonFile;
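A minimal sketch of the new generic read/write interface (paths are placeholders):

// Read with the unified interface instead of SQLContext.jsonFile
val people = sqlContext.read.format("json").load("examples/src/main/resources/people.json")
// Write a subset back out as Parquet via DataFrame.write
people.select("name", "age").write.format("parquet").save("namesAndAges.parquet")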

///////////////////////////////////////////////////////////////////////

Bagel (Pregel on Spark): a simple graph processing model.