
How to save a Spark DataFrame as CSV on disk

2019-03-15  雨笋情缘

Apache Spark before 2.x does not support writing CSV to disk natively.

You have four available solutions, though:

1. You can convert your DataFrame into an RDD:

Option 1:

def convertToReadableString(r: Row): String = ???

df.rdd.map{ convertToReadableString }.saveAsTextFile(filepath)
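
A minimal sketch of what convertToReadableString could look like, assuming the Row's fields contain no embedded commas, quotes or newlines that would need escaping:

import org.apache.spark.sql.Row

// Join the Row's field values with commas (no quoting or escaping).
def convertToReadableString(r: Row): String = r.mkString(",")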

This will create a folder at filepath. Under that path you'll find one file per partition (e.g., part-000*).

What I usually do if I want to append all the partitions into one big CSV is:

cat filepath/part* > mycsvfile.csv
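
If the output lives on HDFS rather than the local filesystem, the same merge can be done with Hadoop's getmerge command (paths here are illustrative):

hadoop fs -getmerge filepath mycsvfile.csv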

Some will use coalesce(1, false) to create one partition from the RDD. It's usually bad practice, since it funnels the entire dataset through a single partition, which may overwhelm the one executor that has to hold it all.
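
For completeness, a sketch of that single-partition variant (only reasonable for small datasets, for the reason above):

df.rdd
  .map(convertToReadableString)
  .coalesce(1, shuffle = false)  // all rows flow through a single task and file
  .saveAsTextFile(filepath)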

Note that df.rdd will return an RDD[Row].

2. With Spark < 2.0, you can use the Databricks spark-csv library:

Spark 1.4+:

Option 2:

df.write.format("com.databricks.spark.csv").save(filepath)

Spark 1.3:

Option 3:

df.save(filepath, "com.databricks.spark.csv")

3. With Spark 2.x the spark-csv package is not needed, as CSV support is built into Spark:

Option 4:

df.write.format("csv").save(filepath)

4. You can convert to a local pandas DataFrame with df.toPandas() and use its to_csv method (PySpark only). Note that toPandas() collects the entire dataset onto the driver, so this only works for small DataFrames.

Note: Solutions 1, 2 and 3 will result in CSV format files (part-*) generated by the underlying Hadoop API that Spark calls when you invoke save. You will have one part- file per partition.

Saving as a text file

Option 1:

bank.rdd.repartition(1).saveAsTextFile("/tmp/df2.txt")

Note: bank is a DataFrame.
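
Note that saveAsTextFile writes each Row's toString form (e.g. [val1,val2]). To control the output format, map each Row to a string first, as in Option 1 above; a sketch:

bank.rdd
  .map(_.mkString("\t"))  // tab-separated fields instead of Row.toString
  .repartition(1)         // single output file (small data only)
  .saveAsTextFile("/tmp/df2.txt")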

Original sources:

https://stackoverflow.com/questions/33174443/how-to-save-a-spark-dataframe-as-csv-on-disk

https://community.hortonworks.com/questions/42838/storage-dataframe-as-textfile-in-hdfs.html
