java.lang.RuntimeException: Unsupported literal type class org.apache.spark.sql.Dataset /Spark - JAVA
Question
I have the following code:
Dataset<Row> dataframe = dfjoin.select(when(df1.col("dateTracking_hour_minute")
        .between(df.col("heureDebut"), df.col("heureFin")),
    dfjoin.filter(col("acc_status").equalTo(0).and(col("acc_previous").equalTo(1)))));
When I run it, it throws an exception:
java.lang.RuntimeException: Unsupported literal type class org.apache.spark.sql.Dataset [ID_tracking: bigint, tracking_time: timestamp ... 109 more fields]
    at org.apache.spark.sql.catalyst.expressions.Literal$.apply(literals.scala:78)
    at org.apache.spark.sql.catalyst.expressions.Literal$.$anonfun$create$2(literals.scala:164)
    at scala.util.Failure.getOrElse(Try.scala:222)
    at org.apache.spark.sql.catalyst.expressions.Literal$.create(literals.scala:164)
    at org.apache.spark.sql.functions$.typedLit(functions.scala:127)
    at org.apache.spark.sql.functions$.lit(functions.scala:110)
    at org.apache.spark.sql.functions$.when(functions.scala:1341)
    at org.apache.spark.sql.functions.when(functions.scala)
    at factory.Arret_Alert.check(Arret_Alert.java:44)
Any ideas?
Thank you.
Answer 1
Score: 0
The when condition you have written is incorrect. Please check the doc comment for when in functions.scala:
 *   // Java:
 *   people.select(when(col("gender").equalTo("male"), 0)
 *     .when(col("gender").equalTo("female"), 1)
 *     .otherwise(2))
 * }}}
 *
 * @group normal_funcs
 * @since 1.4.0
 */
def when(condition: Column, value: Any): Column = withExpr {
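For contrast, here is what that doc example looks like as a self-contained Java program. The WhenExample class name and the sample rows are illustrative, not from the original post:

import static org.apache.spark.sql.functions.col;
import static org.apache.spark.sql.functions.when;

import java.util.Arrays;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.RowFactory;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.types.StructType;

public class WhenExample {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("WhenExample")
                .master("local[*]")
                .getOrCreate();

        // A made-up "people" dataset with a single gender column, mirroring
        // the doc comment above.
        Dataset<Row> people = spark.createDataFrame(
                Arrays.asList(
                        RowFactory.create("male"),
                        RowFactory.create("female"),
                        RowFactory.create("other")),
                new StructType().add("gender", "string"));

        // Each branch value (0, 1, 2) is a plain literal, which is exactly
        // the kind of second argument when() expects.
        people.select(when(col("gender").equalTo("male"), 0)
                .when(col("gender").equalTo("female"), 1)
                .otherwise(2)
                .alias("gender_code"))
              .show();

        spark.stop();
    }
}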
The condition column (the first argument to when) that you passed is correct:
df1.col("dateTracking_hour_minute")
    .between(df.col("heureDebut"), df.col("heureFin"))
But the second argument is of type Dataset, which is not a supported literal type. The supported literal types are:
 def apply(v: Any): Literal = v match {
    case i: Int => Literal(i, IntegerType)
    case l: Long => Literal(l, LongType)
    case d: Double => Literal(d, DoubleType)
    case f: Float => Literal(f, FloatType)
    case b: Byte => Literal(b, ByteType)
    case s: Short => Literal(s, ShortType)
    case s: String => Literal(UTF8String.fromString(s), StringType)
    case c: Char => Literal(UTF8String.fromString(c.toString), StringType)
    case b: Boolean => Literal(b, BooleanType)
    case d: BigDecimal => Literal(Decimal(d), DecimalType.fromBigDecimal(d))
    case d: JavaBigDecimal =>
      Literal(Decimal(d), DecimalType(Math.max(d.precision, d.scale), d.scale()))
    case d: Decimal => Literal(d, DecimalType(Math.max(d.precision, d.scale), d.scale))
    case t: Timestamp => Literal(DateTimeUtils.fromJavaTimestamp(t), TimestampType)
    case d: Date => Literal(DateTimeUtils.fromJavaDate(d), DateType)
    case a: Array[Byte] => Literal(a, BinaryType)
    case a: Array[_] =>
      val elementType = componentTypeToDataType(a.getClass.getComponentType())
      val dataType = ArrayType(elementType)
      val convert = CatalystTypeConverters.createToCatalystConverter(dataType)
      Literal(convert(a), dataType)
    case i: CalendarInterval => Literal(i, CalendarIntervalType)
    case null => Literal(null, NullType)
    case v: Literal => v
    case _ =>
      throw new RuntimeException("Unsupported literal type " + v.getClass + " " + v)
  }
Reference: Literal.apply in literals.scala in the Spark GitHub repository.
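In practice this means the value argument must be something lit() can handle: plain Java literals and Column expressions both work, a Dataset does not. A short sketch against a generic Dataset<Row> named df (a stand-in, not from the original post):

import static org.apache.spark.sql.functions.col;
import static org.apache.spark.sql.functions.lit;
import static org.apache.spark.sql.functions.when;

// OK: an int literal matches the "case i: Int" branch above.
df.select(when(col("acc_status").equalTo(0), 1).otherwise(0));

// OK: wrapping the value in lit() explicitly is equivalent.
df.select(when(col("acc_status").equalTo(0), lit(1)).otherwise(lit(0)));

// OK: a Column is also accepted, because lit() passes Columns through unchanged.
df.select(when(col("acc_status").equalTo(0), col("acc_previous")));

// Throws as soon as when() is called (see the stack trace above): a Dataset
// matches none of the cases, so Literal.apply fails with
// "Unsupported literal type class org.apache.spark.sql.Dataset".
// df.select(when(col("acc_status").equalTo(0), someOtherDataset));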
Please change this part of your code so that it no longer passes a Dataset as the value argument (a possible rewrite is sketched below):
dfjoin.filter(col("acc_status").equalTo(0).and(col("acc_previous").equalTo(1)))
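For example, one possible rewrite folds the acc_status / acc_previous test into the condition itself, so the value argument becomes a plain literal. This is only a sketch of one plausible intent, flagging matching rows with 1; the output column name in_window_stop is made up:

import static org.apache.spark.sql.functions.col;
import static org.apache.spark.sql.functions.when;

// dfjoin, df and df1 are the datasets from the question. The whole test is
// now a single Column condition, and the value arguments (1 and 0) are
// supported literal types.
Dataset<Row> dataframe = dfjoin.select(
        when(df1.col("dateTracking_hour_minute")
                .between(df.col("heureDebut"), df.col("heureFin"))
                .and(col("acc_status").equalTo(0))
                .and(col("acc_previous").equalTo(1)), 1)
            .otherwise(0)
            .alias("in_window_stop"));

Alternatively, if the goal was really to keep only the matching rows, a plain filter with both conditions (and no when at all) may be closer to what you want.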