
Two ways to convert a Spark RDD to a DataFrame

2016-08-31 10:48
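Spark offers two common ways to turn an RDD into a DataFrame: (1) build an RDD[Row] and pair it with an explicit StructType schema via sqlContext.createDataFrame, and (2) import sqlContext.implicits._ and call toDF on an RDD of tuples. The listing below uses approach 1, with approach 2 kept in a comment at the end. It joins two space-delimited text files on their first field (each line is assumed to look like "zhangsan 0.5", a name followed by a rate; the actual data files are not shown) and writes the result to a MySQL table over JDBC.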
package l847164916

import java.util.Properties

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.{Row, SQLContext, SaveMode}
import org.apache.spark.sql.types._

/**
 * Created by Administrator on 2016/9/2.
 */
object Test {
  def main(args: Array[String]): Unit = {
    val url = "jdbc:mysql://xxxxxxx:3306/hyn_profile"
    val prop = new Properties()
    prop.setProperty("user", "root")
    prop.setProperty("password", "xxxxx")
    prop.setProperty("driver", "com.mysql.jdbc.Driver")

    val conf = new SparkConf().setAppName("test").setMaster("local")
    val sc = new SparkContext(conf)
    val sqlContext = new SQLContext(sc)

    val schema = StructType(
      StructField("name", StringType) ::
      StructField("rate1", DoubleType) ::
      StructField("rate2", DoubleType) :: Nil
    )
    // Each input line is space-delimited; keep the first two fields as (key, value).
    val rdd1 = sc.textFile("D:/LCH/hellomoto/data").map(_.split(" ")).map(r => (r(0), r(1)))
    val rdd2 = sc.textFile("D:/LCH/hellomoto/data1").map(_.split(" ")).map(r => (r(0), r(1)))
    // 1. Only an outer join produces Option values.
    // 2. getOrElse() sets the default for values left empty by the join;
    //    note the default's type must match the schema (Double here).
    val rdd = rdd1.fullOuterJoin(rdd2)
      .map { case (name, (rate1, rate2)) =>
        Row(name, rate1.map(_.toDouble).getOrElse(0.0), rate2.map(_.toDouble).getOrElse(0.0))
      }
    val df = sqlContext.createDataFrame(rdd, schema)
    df.write.mode(SaveMode.Overwrite).jdbc(url, "test", prop)
    /*
    // Approach 2: implicit conversion to a DataFrame.
    import sqlContext.implicits._
    val df = rdd1.fullOuterJoin(rdd2)
      .map { case (name, (rate1, rate2)) => (name, rate1, rate2) }
      .toDF("name", "rate1", "rate2")
    df.write.mode(SaveMode.Overwrite).jdbc(url, "test", prop)
    */
  }
}
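
For reference, here is the commented-out approach 2 as a standalone, minimal sketch (the object name TestToDF is hypothetical; paths, URL, and credentials are the placeholders from the original). One caveat the comment above hides: calling toDF directly on (String, Option[String], Option[String]) produces nullable string-typed rate columns, so this sketch converts the Options to Double first to keep the resulting MySQL table consistent with the schema used in approach 1.

// A minimal, self-contained sketch of approach 2 (implicit toDF), assuming
// the same placeholder input paths and MySQL settings as the listing above.
package l847164916

import java.util.Properties

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.{SQLContext, SaveMode}

object TestToDF {
  def main(args: Array[String]): Unit = {
    val url = "jdbc:mysql://xxxxxxx:3306/hyn_profile"
    val prop = new Properties()
    prop.setProperty("user", "root")
    prop.setProperty("password", "xxxxx")
    prop.setProperty("driver", "com.mysql.jdbc.Driver")

    val sc = new SparkContext(new SparkConf().setAppName("test-toDF").setMaster("local"))
    val sqlContext = new SQLContext(sc)
    import sqlContext.implicits._

    val rdd1 = sc.textFile("D:/LCH/hellomoto/data").map(_.split(" ")).map(r => (r(0), r(1)))
    val rdd2 = sc.textFile("D:/LCH/hellomoto/data1").map(_.split(" ")).map(r => (r(0), r(1)))

    // Convert the Option[String] sides of the full outer join to Double up front:
    // getOrElse(0.0) matches the defaulting behaviour of approach 1, and the
    // resulting Double fields become DoubleType columns under toDF.
    val df = rdd1.fullOuterJoin(rdd2)
      .map { case (name, (rate1, rate2)) =>
        (name, rate1.map(_.toDouble).getOrElse(0.0), rate2.map(_.toDouble).getOrElse(0.0))
      }
      .toDF("name", "rate1", "rate2")

    df.write.mode(SaveMode.Overwrite).jdbc(url, "test", prop)
  }
}

To sanity-check either variant, the table can be read back with sqlContext.read.jdbc(url, "test", prop).show() (available since Spark 1.4).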