
failed to compile spark-sql-perf for spark 2.1.0 #97

Open
ccwgit opened this issue Jan 25, 2017 · 4 comments

Comments


ccwgit commented Jan 25, 2017

Hello,

I get the following errors when trying to build the spark-sql-perf package for Spark 2.1.0. Has anyone seen this before, and is there a known fix? Thanks.

Steps:
git clone https://github.com/databricks/spark-sql-perf.git
cd spark-sql-perf
Edit build.sbt to set sparkVersion := "2.1.0" (the exact line is shown after these steps)
./build/sbt clean package
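
For reference, the edit in the third step amounts to this single setting in build.sbt:

sparkVersion := "2.1.0"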

Compilation errors:
spark-sql-perf/src/main/scala/com/databricks/spark/sql/perf/Query.scala:89: Cannot prove that (org.apache.spark.sql.catalyst.trees.TreeNode[_$6], Int) forSome { type _$6 } <:< (T, U).
[error] val indexMap = physicalOperators.map { case (index, op) => (op, index) }.toMap
[error] ^
[error] spark-sql-perf/src/main/scala/com/databricks/spark/sql/perf/Query.scala:97: value execute is not a member of org.apache.spark.sql.catalyst.trees.TreeNode[_$6]
[error] newNode.execute().foreach((row: Any) => Unit)
[error] ^
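
For context, both errors point at the same root cause. Here is a Spark-free sketch of the failure mode (an assumption, not the project's code: in Spark 2.1, TreeNode.apply(i) returns the existential TreeNode[_], so indexing into executedPlan loses the SparkPlan type; all names below are stand-ins):

object ExistentialRepro {
  trait TreeNode[T <: TreeNode[T]] {
    def apply(i: Int): TreeNode[_] = this  // stand-in for catalyst's TreeNode.apply
  }
  class SparkPlan extends TreeNode[SparkPlan] {
    def execute(): Unit = ()               // only SparkPlan exposes execute()
  }

  val plan = new SparkPlan
  val physicalOperators = (0 until 3).map(i => (i, plan(i)))  // Seq[(Int, TreeNode[_])]

  // Either of these reproduces a reported error if uncommented:
  // physicalOperators.map { case (i, op) => (op, i) }.toMap
  //   -> Cannot prove that (TreeNode[_$1], Int) forSome { type _$1 } <:< (T, U)
  // plan(0).execute()
  //   -> value execute is not a member of TreeNode[_]

  // Casting back to SparkPlan restores both the toMap evidence and execute():
  val indexMap =
    physicalOperators.map { case (i, op) => (op.asInstanceOf[SparkPlan], i) }.toMap
}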


ghost commented May 18, 2017

@ccwgit Me too. Has anyone found a way to solve this? Thanks.


kachini commented May 18, 2017

Got exactly the same problem with Spark 2.1.1. Wondering if anyone has fixed it.


ghost commented May 22, 2017

I updated my code, but I don't know if it's correct.

val breakdownResults = if (includeBreakdown) {
        // Count the physical operators in the executed plan.
        val depth = queryExecution.executedPlan.collect { case p: SparkPlan => p }.size
        // Note: this pairs every index with the root executedPlan rather than
        // the i-th operator, so each timed run below executes the whole plan.
        val physicalOperators = (0 until depth).map(i => (i, queryExecution.executedPlan))
        val indexMap = physicalOperators.map { case (index, op) => (op, index) }.toMap
        val timeMap = new mutable.HashMap[Int, Double]

        physicalOperators.reverse.map {
          case (index, node) =>
            messages += s"Breakdown: ${node.simpleString}"
            // Re-plan the query so each measurement starts from a fresh plan.
            val newNode: SparkPlan = buildDataFrame.queryExecution.executedPlan
            val executionTime = measureTimeMs {
              // Force full evaluation; the per-row closure is a no-op.
              newNode.execute().foreach((row: Any) => ())
            }
            timeMap += ((index, executionTime))

            val childIndexes = node.children.map(indexMap)
            val childTime = childIndexes.map(timeMap).sum

            messages += s"Breakdown time: $executionTime (+${executionTime - childTime})"

            BreakdownResult(
              node.nodeName,
              node.simpleString.replaceAll("#\\d+", ""),
              index,
              childIndexes,
              executionTime,
              executionTime - childTime)
        }
      } else {
        Seq.empty[BreakdownResult]
      }
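
One caveat on this workaround, plus an alternative (an assumption about the original source: the failing line indexed the plan with queryExecution.executedPlan(i), which returns TreeNode[_] in Spark 2.1). Substituting the root executedPlan makes every "operator" the full plan, so the per-operator breakdown times lose their meaning. Casting the indexed lookup instead keeps the original semantics while satisfying the compiler:

val physicalOperators = (0 until depth).map { i =>
  // executedPlan(i) returns TreeNode[_] in Spark 2.1; cast back to SparkPlan
  // so toMap gets its (K, V) evidence and execute() is available again.
  (i, queryExecution.executedPlan(i).asInstanceOf[SparkPlan])
}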


hongbo commented Oct 31, 2018

Failed to compile the benchmark on Spark 2.3.2. It looks like some library classes are missing.

sh-4.2# bin/run --help
Using as default JAVA_HOME.
Note, this will be overridden by -java-home if it is set.
OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
[info] Loading project definition from /opt/spark/bin/spark-sql-perf-master/project
Missing bintray credentials /root/.bintray/.credentials. Some bintray features depend on this.
[info] Set current project to spark-sql-perf (in build file:/opt/spark/bin/spark-sql-perf-master/)
[warn] Credentials file /root/.bintray/.credentials does not exist
[info] Compiling 66 Scala sources to /opt/spark/bin/spark-sql-perf-master/target/scala-2.11/classes...
[warn] /opt/spark/bin/spark-sql-perf-master/src/main/scala/com/databricks/spark/sql/perf/CpuProfile.scala:107: non-variable type argument String in type pattern Seq[String] (the underlying of Seq[String]) is unchecked since it is eliminated by erasure
[warn] case Row(stackLines: Seq[String], count: Long) => stackLines.map(toStackElement) -> count :: Nil
[warn] ^
[warn] /opt/spark/bin/spark-sql-perf-master/src/main/scala/com/databricks/spark/sql/perf/tpcds/TPCDS.scala:30: no valid targets for annotation on value sqlContext - it is discarded unused. You may specify targets with meta-annotations, e.g. @(transient @param)
[warn] class TPCDS(@transient sqlContext: SQLContext)
[warn] ^
[warn] /opt/spark/bin/spark-sql-perf-master/src/main/scala/com/databricks/spark/sql/perf/tpch/TPCH.scala:167: no valid targets for annotation on value sqlContext - it is discarded unused. You may specify targets with meta-annotations, e.g. @(transient @param)
[warn] class TPCH(@transient sqlContext: SQLContext)
[warn] ^
[error] /opt/spark/bin/spark-sql-perf-master/src/main/scala/org/apache/spark/ml/ModelBuilderSSP.scala:48: not found: type ClassificationNode
[error] .asInstanceOf[ClassificationNode]
[error] ^
[error] /opt/spark/bin/spark-sql-perf-master/src/main/scala/org/apache/spark/ml/ModelBuilderSSP.scala:59: not found: type RegressionNode
[error] .asInstanceOf[RegressionNode]
[error] ^
[error] /opt/spark/bin/spark-sql-perf-master/src/main/scala/org/apache/spark/ml/ModelBuilderSSP.scala:180: not found: type RegressionLeafNode
[error] new RegressionLeafNode(prediction, impurity, impurityStats)
[error] ^
[error] /opt/spark/bin/spark-sql-perf-master/src/main/scala/org/apache/spark/ml/ModelBuilderSSP.scala:182: not found: type ClassificationLeafNode
[error] new ClassificationLeafNode(prediction, impurity, impurityStats)
[error] ^
[error] /opt/spark/bin/spark-sql-perf-master/src/main/scala/org/apache/spark/ml/ModelBuilderSSP.scala:196: not found: type RegressionInternalNode
[error] new RegressionInternalNode(prediction, impurity, gain,
[error] ^
[error] /opt/spark/bin/spark-sql-perf-master/src/main/scala/org/apache/spark/ml/ModelBuilderSSP.scala:197: not found: type RegressionNode
[error] leftChild.asInstanceOf[RegressionNode], rightChild.asInstanceOf[RegressionNode],
[error] ^
[error] /opt/spark/bin/spark-sql-perf-master/src/main/scala/org/apache/spark/ml/ModelBuilderSSP.scala:197: not found: type RegressionNode
[error] leftChild.asInstanceOf[RegressionNode], rightChild.asInstanceOf[RegressionNode],
[error] ^
[error] /opt/spark/bin/spark-sql-perf-master/src/main/scala/org/apache/spark/ml/ModelBuilderSSP.scala:200: not found: type ClassificationInternalNode
[error] new ClassificationInternalNode(prediction, impurity, gain,
[error] ^
[error] /opt/spark/bin/spark-sql-perf-master/src/main/scala/org/apache/spark/ml/ModelBuilderSSP.scala:201: not found: type ClassificationNode
[error] leftChild.asInstanceOf[ClassificationNode], rightChild.asInstanceOf[ClassificationNode],
[error] ^
[error] /opt/spark/bin/spark-sql-perf-master/src/main/scala/org/apache/spark/ml/ModelBuilderSSP.scala:201: not found: type ClassificationNode
[error] leftChild.asInstanceOf[ClassificationNode], rightChild.asInstanceOf[ClassificationNode],
[error] ^
[warn] three warnings found
[error] 10 errors found
[error] (compile:compileIncremental) Compilation failed
[error] Total time: 10 s, completed Oct 31, 2018 11:52:58 PM
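
For what it's worth (an assumption, not verified here): ClassificationNode, RegressionNode, and the Leaf/Internal node variants referenced by ModelBuilderSSP.scala appear to exist in org.apache.spark.ml.tree only from Spark 2.4 onwards, so a master checkout that uses them will not compile against Spark 2.3.2. One option is to build against a Spark version the checkout targets, e.g. in build.sbt:

sparkVersion := "2.4.0"  // hypothetical; match whatever the checked-out branch expects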
