
Fixing the Chinese Character Display Problem in a PySpark Notebook

2017-01-04 15:45
In the previous article, I set up a notebook environment with Anaconda on an HDP 2.5 platform and used PySpark for Spark analysis. When reading text files, I found that Chinese characters could not be displayed, and various encoding workarounds all failed.
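For reference, the read itself looked roughly like the following (a minimal sketch; the file path /tmp/sample_zh.txt is hypothetical). Under Python 2, the notebook showed escape sequences or mojibake instead of the Chinese text:

from pyspark import SparkContext

sc = SparkContext(appName="zh-demo")  # or reuse the notebook's existing sc
# hypothetical path to a UTF-8 text file containing Chinese characters
lines = sc.textFile("/tmp/sample_zh.txt")
for line in lines.take(5):
    # under Python 2 this printed u'\uXXXX'-style escapes or garbled bytes
    print(line)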

Upgrading Python to Python 3 then led to a runtime error complaining that the driver's Python version did not match the executors':

Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.runJob.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 1 times, most recent failure: Lost task 0.0 in stage 1.0 (TID 1, localhost): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/usr/hdp/2.5.3.0-37/spark/python/lib/pyspark.zip/pyspark/worker.py", line 64, in main
    ("%d.%d" % sys.version_info[:2], version))
Exception: Python in worker has different version 2.7 than that in driver 3.5, PySpark cannot run with different minor versions
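Before changing anything cluster-wide, it is easy to confirm which interpreter each side is using (a sketch; assumes a live SparkContext named sc):

import sys, os

print(sys.executable, sys.version_info[:2])   # the driver's interpreter, e.g. (3, 5)
print(os.environ.get("PYSPARK_PYTHON"))       # the interpreter workers will launch, if set
# Once driver and workers agree, the same check can be run on an executor:
# sc.parallelize([0]).map(lambda _: __import__("sys").version_info[:2]).first()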


Following the approach described in http://blog.csdn.net/huobanjishijian/article/details/52538078, add the following environment variable on every node of the Spark cluster:

export PYSPARK_PYTHON=/root/anaconda3/bin/python
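Depending on how Spark is launched, the same setting can also live in Spark's own configuration rather than the shell profile; a common alternative (the conf path below is assumed for this HDP layout) is spark-env.sh:

# in /usr/hdp/current/spark-client/conf/spark-env.sh (path assumed)
export PYSPARK_PYTHON=/root/anaconda3/bin/python
export PYSPARK_DRIVER_PYTHON=/root/anaconda3/bin/python  # keep the driver on the same interpreter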

After restarting all nodes and the big-data cluster services, files containing Chinese text could be read and displayed correctly.
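A quick way to verify the fix from the notebook (a sketch; the sample path is hypothetical):

rdd = sc.textFile("/tmp/sample_zh.txt")  # hypothetical UTF-8 file containing Chinese text
for line in rdd.take(3):
    print(line)  # with Python 3 on both driver and workers, the Chinese characters print correctly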
Tags: python pyspark