Spark on IntelliJ IDEA
2015-11-11 16:44
![](http://images2015.cnblogs.com/blog/364496/201511/364496-20151111163436400-1500943678.png)
Add the Scala plugin
![](http://images2015.cnblogs.com/blog/364496/201511/364496-20151111163508150-476344925.png)
![](http://images2015.cnblogs.com/blog/364496/201511/364496-20151111163522947-589353011.png)
If your network connection is unreliable, you can download the plugin package manually (http://plugins.jetbrains.com/plugin/?id=1347). Then, in the plugin dialog above, choose "Install plugin from disk", browse to the path where you saved the plugin, and click OK.
Next, create a new Scala project
![](http://images2015.cnblogs.com/blog/364496/201511/364496-20151111163608525-1265011277.png)
Scala 2.11 is reported to have compatibility problems with Spark; 2.10 is recommended.
![](http://images2015.cnblogs.com/blog/364496/201511/364496-20151111163637275-1727186500.png)
After clicking Finish, the project is empty
![](http://images2015.cnblogs.com/blog/364496/201511/364496-20151111163725900-858915416.png)
Download Spark from the official site (http://spark.apache.org/), extract it, and import [spark root path]/lib/spark-assembly-1.5.1-hadoop2.6.0.jar via File -> Project Structure.
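The download-and-extract step above can be sketched on the command line. The mirror URL below is an assumption — use whichever release and mirror matches your setup:

```shell
# Hypothetical mirror/URL — substitute the release you actually downloaded.
wget http://archive.apache.org/dist/spark/spark-1.5.1/spark-1.5.1-bin-hadoop2.6.tgz
tar -xzf spark-1.5.1-bin-hadoop2.6.tgz
# This is the assembly jar to import via File -> Project Structure:
ls spark-1.5.1-bin-hadoop2.6/lib/spark-assembly-1.5.1-hadoop2.6.0.jar
```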
![](http://images2015.cnblogs.com/blog/364496/201511/364496-20151111163756369-920737491.png)
Create a new Scala object under src
```scala
/**
 * Created by Manhua on 2015/11/11.
 */
import org.apache.spark._
import scala.math.random

object s {
  def main(args: Array[String]) {
    val spark = new SparkContext("local", "Spark Pi")
    val slices = 2
    val n = 100000 * slices
    // Monte Carlo estimate of Pi: sample points in the unit square,
    // count those falling inside the unit circle.
    val count = spark.parallelize(1 to n, slices).map { i =>
      val x = random * 2 - 1
      val y = random * 2 - 1
      if (x * x + y * y < 1) 1 else 0
    }.reduce(_ + _)
    println("Pi is roughly " + 4.0 * count / n)
    spark.stop()
  }
}
```
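If you want to sanity-check the sampling logic without a Spark installation, the same Monte Carlo estimate can be written in plain Scala. The object and method names here are my own, not from the original project:

```scala
import scala.util.Random

// Plain-Scala Monte Carlo Pi estimate — a minimal sketch of the same
// sampling logic used in the Spark example, runnable on its own.
object PiEstimate {
  def estimatePi(n: Int, seed: Long = 42L): Double = {
    val rng = new Random(seed)
    // Count samples landing inside the unit circle.
    val hits = (1 to n).count { _ =>
      val x = rng.nextDouble() * 2 - 1
      val y = rng.nextDouble() * 2 - 1
      x * x + y * y < 1
    }
    // Area ratio circle/square = Pi/4, so scale by 4.
    4.0 * hits / n
  }

  def main(args: Array[String]): Unit =
    println("Pi is roughly " + estimatePi(1000000))
}
```

With a million samples the estimate is typically within a few thousandths of Pi; the Spark version distributes exactly this computation across `slices` partitions.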
Right-click test.scala, then compile and run it
![](http://images2015.cnblogs.com/blog/364496/201511/364496-20151111164201212-1062512374.png)
Package as a jar and run
Before building the package, you need to create an artifact: File -> Project Structure -> Artifacts -> + -> JAR -> From modules with dependencies, then pick a class to serve as the main class.
![](http://images2015.cnblogs.com/blog/364496/201511/364496-20151111165428869-1856693213.png)
Select the jar's entry point (main class)
![](http://images2015.cnblogs.com/blog/364496/201511/364496-20151111170006072-669007955.png)
![](http://images2015.cnblogs.com/blog/364496/201511/364496-20151111170128978-568503806.png)
After clicking OK, you'll see the following
![](http://images2015.cnblogs.com/blog/364496/201511/364496-20151111170423322-1937437581.png)
Remove all the dependency jars except the compile output. The Name can be changed as needed.
![](http://images2015.cnblogs.com/blog/364496/201511/364496-20151111170047087-1175101423.png)
Now you can build
![](http://images2015.cnblogs.com/blog/364496/201511/364496-20151111170831369-2057485970.png)
![](http://images2015.cnblogs.com/blog/364496/201511/364496-20151111170839212-299129927.png)
Output directory
![](http://images2015.cnblogs.com/blog/364496/201511/364496-20151111170926322-692519632.png)
Upload and run
![](http://images2015.cnblogs.com/blog/364496/201511/364496-20151111171004244-538994507.png)
![](http://images2015.cnblogs.com/blog/364496/201511/364496-20151111171011572-400933082.png)
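Once the jar is on the target machine, it is typically launched with spark-submit. The jar path and master URL below are placeholders; the main class `s` matches the example above:

```shell
# Jar path and master URL are placeholders — adjust to your environment.
spark-submit \
  --class s \
  --master local[2] \
  /path/to/your-artifact.jar
```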