Asked by: 小点点

Reading a local parquet file in Spark 2.0


In Spark 1.6.2, I can read a local parquet file by doing something very simple:

SQLContext sqlContext = new SQLContext(new SparkContext("local[*]", "Java Spark SQL Example"));
DataFrame parquet = sqlContext.read().parquet("file:///C:/files/myfile.csv.parquet");
parquet.show(20);

I am trying to upgrade to Spark 2.0.0 and achieve the same thing by running:

SparkSession spark = SparkSession.builder().appName("Java Spark SQL Example").master("local[*]").getOrCreate();
Dataset<Row> parquet = spark.read().parquet("file:///C:/files/myfile.csv.parquet");
parquet.show(20);

This is running on Windows, from IntelliJ (a Java project). I am not using a Hadoop cluster at the moment (one will come later, but right now I am just trying to get the processing logic right and get familiar with the API).

Unfortunately, when run with Spark 2.0, the code throws an exception:

Exception in thread "main" java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative path in absolute URI: file:C:/[my intellij project path]/spark-warehouse
at org.apache.hadoop.fs.Path.initialize(Path.java:206)
at org.apache.hadoop.fs.Path.<init>(Path.java:172)
at org.apache.spark.sql.catalyst.catalog.SessionCatalog.makeQualifiedPath(SessionCatalog.scala:114)
at org.apache.spark.sql.catalyst.catalog.SessionCatalog.createDatabase(SessionCatalog.scala:145)
at org.apache.spark.sql.catalyst.catalog.SessionCatalog.<init>(SessionCatalog.scala:89)
at org.apache.spark.sql.internal.SessionState.catalog$lzycompute(SessionState.scala:95)
at org.apache.spark.sql.internal.SessionState.catalog(SessionState.scala:95)
at org.apache.spark.sql.internal.SessionState$$anon$1.<init>(SessionState.scala:112)
at org.apache.spark.sql.internal.SessionState.analyzer$lzycompute(SessionState.scala:112)
at org.apache.spark.sql.internal.SessionState.analyzer(SessionState.scala:111)
at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:49)
at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:64)
at org.apache.spark.sql.SparkSession.baseRelationToDataFrame(SparkSession.scala:382)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:143)
at org.apache.spark.sql.DataFrameReader.parquet(DataFrameReader.scala:427)
at org.apache.spark.sql.DataFrameReader.parquet(DataFrameReader.scala:411)
at lili.spark.ParquetTest.main(ParquetTest.java:15)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)
Caused by: java.net.URISyntaxException: Relative path in absolute URI: file:C:/[my intellij project path]/spark-warehouse
at java.net.URI.checkPath(URI.java:1823)
at java.net.URI.<init>(URI.java:745)
at org.apache.hadoop.fs.Path.initialize(Path.java:203)
... 21 more

I have no idea why it is trying to touch anything in my project directory. Is there some configuration I am missing that had a sensible default in Spark 1.6.2 but no longer does in 2.0? In other words, what is the simplest way to read a local parquet file in Spark 2.0 on Windows?


1 Answer

Anonymous user

It looks like you have hit SPARK-15893. The Spark developers changed how files are read between 1.6.2 and 2.0.0. Per the comments on the JIRA ticket, you should open the conf\spark-defaults.conf file and add:

"spark.sql.warehouse.dir=file:///C:/Experiment/spark-2.0.0-bin-without-hadoop/spark-warehouse"

Then you should be able to load the parquet file like this:

Dataset<Row> parquet = spark.read().parquet("C:/files/myfile.csv.parquet");
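
As an alternative worth trying, the same property can be set programmatically when building the SparkSession, which avoids editing spark-defaults.conf. Below is a minimal sketch, assuming the Spark 2.0.0 Java API; the warehouse path file:///C:/tmp/spark-warehouse is an arbitrary placeholder, not taken from the original question:

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class ParquetTest {
    public static void main(String[] args) {
        // Setting spark.sql.warehouse.dir to an explicit file:/// URI sidesteps
        // the malformed "file:C:/..." path Spark 2.0.0 derives on Windows.
        // For plain file reads the directory is arbitrary; nothing is written
        // to it unless you create managed tables.
        SparkSession spark = SparkSession.builder()
                .appName("Java Spark SQL Example")
                .master("local[*]")
                .config("spark.sql.warehouse.dir", "file:///C:/tmp/spark-warehouse")
                .getOrCreate();

        Dataset<Row> parquet = spark.read().parquet("file:///C:/files/myfile.csv.parquet");
        parquet.show(20);
    }
}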