I am trying to load a Parquet file in Spark as a DataFrame:
val df = spark.read.parquet(path)
and I get:
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 5.0 failed 4 times, most recent failure: Lost task 0.3 in stage 5.0 (TID 12, 10.250.2.32): java.lang.UnsupportedOperationException: Complex types not supported.
While browsing the code, I noticed that there is a check in Spark's VectorizedParquetRecordReader.java (inside initializeInternal):
Type t = requestedSchema.getFields().get(i);
if (!t.isPrimitive() || t.isRepetition(Type.Repetition.REPEATED)) {
throw new UnsupportedOperationException("Complex types not supported.");
}
So I think it is failing on the isRepetition check. Can anyone suggest a way to resolve this?
My Parquet data looks like this:
Key1 = value1
Key2 = value1
Key3 = value1
Key4:
.list:
..element:
...key5:
....list:
.....element:
......certificateSerialNumber = dfsdfdsf45345
......issuerName = CN=Microsoft Windows Verification PCA, O=Microsoft Corporation, L=Redmond, S=Washington, C=US
......subjectName = CN=Microsoft Windows, OU=MOPR, O=Microsoft Corporation, L=Redmond, S=Washington, C=US
......thumbprintAlgorithm = Sha1
......thumbprintContent = sfdasf42dsfsdfsdfsd
......validFrom = 2009-12-07 21:57:44.000000
......validTo = 2011-03-07 21:57:44.000000
....list:
.....element:
......certificateSerialNumber = dsafdsafsdf435345
......issuerName = CN=Microsoft Root Certificate Authority, DC=microsoft, DC=com
......subjectName = CN=Microsoft Windows Verification PCA, O=Microsoft Corporation, L=Redmond, S=Washington, C=US
......thumbprintAlgorithm = Sha1
......thumbprintContent = sdfsdfdsf43543
......validFrom = 2005-09-15 21:55:41.000000
......validTo = 2016-03-15 22:05:41.000000
I suspect Key4 may be causing the problem because of its nested tree. The input data was JSON, so perhaps Parquet does not handle that level of nesting the way JSON does.
I found a bug report at https://issues.apache.org/jira/browse/HIVE-13744, but it describes a complex-types issue in Hive. I am not sure whether fixing that would also resolve the Parquet problem.
UPDATE 1: Exploring the Parquet output further, I came to the following conclusion:
spark.write created 5 Parquet files, and 2 of them are empty. In those empty files, the column that should be ArrayType has a StringType schema, and when I try to read all the files together I get the exception above.
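For reference, this is roughly how one can check each part file's schema on its own to confirm the mismatch (the file names below are placeholders, not my actual paths):

// Minimal sketch: print the schema of each part file individually so the
// empty files with the StringType column stand out. Paths are placeholders.
val partFiles = Seq(
  "/path/to/output/part-00000.parquet",
  "/path/to/output/part-00001.parquet"
)
partFiles.foreach { f =>
  println(s"--- $f ---")
  spark.read.parquet(f).printSchema()
}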
Take 1
SPARK-12854 Vectorize Parquet reader states that "ColumnarBatch supports structs and arrays" as of Spark 2.0.0 (cf. GitHub pull request 10820),
and SPARK-13518 Enable vectorized parquet reader by default, also as of Spark 2.0.0, deals with the property spark.sql.parquet.enableVectorizedReader
(cf. GitHub commit e809074).
My 2 cents: disable the "vectorized reader" optimization and see what happens.
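Something along these lines, assuming you can change the session config before the read (the config key is the one handled by SPARK-13518):

// Sketch: turn off the vectorized Parquet reader for this session and retry the read;
// Spark should then fall back to the parquet-mr record reader, which is expected
// to handle nested/complex types.
spark.conf.set("spark.sql.parquet.enableVectorizedReader", "false")
val df = spark.read.parquet(path)
df.printSchema()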
Take 2
Since the problem has been narrowed down to a few empty files that do not expose the same schema as the "real" files, my 3 cents: experiment with spark.sql.parquet.mergeSchema
and see whether the schema from the real files takes precedence after merging.
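A minimal sketch of that experiment, using the per-read mergeSchema option rather than the global property:

// Sketch: enable Parquet schema merging for this read only, so the footers of
// all part files are reconciled into one schema instead of relying on a single file.
val merged = spark.read
  .option("mergeSchema", "true")   // per-read equivalent of spark.sql.parquet.mergeSchema
  .parquet(path)
merged.printSchema()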
Besides that, you might try to eliminate the empty files at write time with some kind of repartitioning, e.g. coalesce(1)
(OK, 1 is a bit of a caricature, but you get the idea).
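For example, something like this at write time (outPath is just a placeholder):

// Sketch: collapse to a single partition before writing so no empty part files
// are produced; for larger data, repartition(n) with a small n is a gentler option.
df.coalesce(1).write.mode("overwrite").parquet(outPath)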