Job in state DEFINE instead of RUNNING for scala wordcount example #320
Labels
api: bigtable
Issues related to the GoogleCloudPlatform/cloud-bigtable-examples API.
type: question
Request for information or clarification. Not an issue.
Hi,
I am trying your example from the spark-shell of a Dataproc cluster:
https://github.com/GoogleCloudPlatform/cloud-bigtable-examples/blob/master/scala/spark-wordcount/src/main/scala/com/example/bigtable/spark/wordcount/WordCount.scala
It's giving me an error at this line:
val job = new Job(conf)
Error:
java.lang.IllegalStateException: Job in state DEFINE instead of RUNNING
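For context, this exception comes from an internal state check in Hadoop's `org.apache.hadoop.mapreduce.Job`: most of its methods, including `toString`, require the job to already be in the `RUNNING` state and throw `IllegalStateException` otherwise. Since the spark-shell REPL echoes every top-level `val` by calling `toString` on it, evaluating `val job = new Job(conf)` interactively can trigger the check even though the same code is fine under spark-submit. Below is a minimal, simplified sketch of that mechanism; `MockJob` is a hypothetical stand-in for illustration, not Hadoop's actual implementation:

```scala
// Simplified mock of the state check inside org.apache.hadoop.mapreduce.Job.
object JobStateDemo {
  object JobState extends Enumeration { val DEFINE, RUNNING = Value }

  class MockJob {
    private var state = JobState.DEFINE

    // Hadoop's Job throws exactly this message from its ensureState check.
    private def ensureState(required: JobState.Value): Unit =
      if (state != required)
        throw new IllegalStateException(
          s"Job in state $state instead of $required")

    def submit(): Unit = { state = JobState.RUNNING }

    // Like Hadoop's Job.toString, this requires the RUNNING state --
    // which is why the spark-shell echo of `val job = ...` fails.
    override def toString: String = { ensureState(JobState.RUNNING); "job" }
  }

  def main(args: Array[String]): Unit = {
    val job = new MockJob
    val msg =
      try { job.toString; "" }
      catch { case e: IllegalStateException => e.getMessage }
    println(msg)  // Job in state DEFINE instead of RUNNING
    job.submit()
    println(job)  // job
  }
}
```

This reproduces the observed message without Spark or Hadoop on the classpath, which supports the interpretation that the REPL's echo, not the `Job` constructor itself, is what throws.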
Solution I tried:
I changed the above line as follows:
val job = new org.apache.hadoop.mapreduce.Job(conf)
but it still gives me the same error.
When I searched Google for this issue, some people suggested running it with spark-submit instead of spark-shell, but I want to run it in spark-shell only.
Could you please suggest how I can achieve this using spark-shell?
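If the cause is indeed the REPL echoing the `Job` (and calling its state-checked `toString`), one workaround is to keep the `Job` from ever being a top-level echoed value, e.g. by doing all the job setup inside a single block or inside `:paste` mode. A sketch of what that might look like in spark-shell, assuming the `Job.getInstance` factory method (which replaces the deprecated `new Job(conf)` constructor); the setup calls shown are placeholders for whatever the WordCount example configures:

```scala
// In spark-shell: wrap the Job in a block (or use :paste / :silent) so the
// REPL never echoes the Job value itself -- only the block's final result.
import org.apache.hadoop.mapreduce.Job

val jobConf = {
  val job = Job.getInstance(conf)   // `conf` as built earlier in the example
  // ... job.setOutputFormatClass(...), TableMapReduceUtil calls, etc. ...
  job.getConfiguration              // echoed value is a Configuration, not the Job
}
```

Since only the block's last expression is printed by the REPL, `Job.toString` is never invoked on the unsubmitted job, so the `DEFINE instead of RUNNING` exception should not be triggered. This is a hedged suggestion based on reports of the same error in spark-shell, not something confirmed against this specific example.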