Task failed while writing rows #187
Comments
Facing a similar issue, did you find a fix for this?
I got exactly the same error as you. In my case it was caused by the special character \001 in the data being written. Removing the special character fixed it.
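The comment above attributes the failure to \001 (the SOH control character) appearing in the data being written. As a minimal sketch (plain Python, not this project's API; in a real job it would be applied to the affected string columns, e.g. via a Spark UDF), one way to strip such control characters before writing:

```python
import re

# Matches C0 control characters, including \x01 (SOH) mentioned above,
# while leaving tab, newline, and carriage return intact.
CONTROL_CHARS = re.compile(r"[\x00-\x08\x0b\x0c\x0e-\x1f]")

def sanitize(value: str) -> str:
    """Remove non-printable control characters such as \x01 from a string."""
    return CONTROL_CHARS.sub("", value)

print(sanitize("hello\x01world"))  # -> helloworld
```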
Hi,
I'm trying to use TF with Spark. I can run a Spark session either locally or on a cluster, but my problem remains the same. I have Spark 3.1.1, Scala 2.12.10, OpenJDK 1.8.0_282, and TensorFlow 2.5.0. I compiled both spark-tensorflow-connector and tensorflow-hadoop using the commands listed in the README (`mvn clean install`). I then added `.config("spark.jars", "C:\spark\spark-3.1.1-bin-hadoop2.7\jars\spark-tfrecord_2.12-0.3.0")` to my Spark connection. I have looked through this repo for similar problems, and none of the suggested solutions have worked for me. Running either the provided example or my own code results in the same error:
I've also tried to compile for my specific versions (using the examples provided in the README), which resulted in a "Maven packages not found" error.
Any suggestions? Or should I use another method to run TF within a Spark environment?
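One detail worth double-checking in the setup described above (an assumption, not a confirmed fix): `spark.jars` expects the path to the actual jar file, and the value quoted above has no `.jar` extension. The jar can also be supplied on the launch command line instead of in the session config. A sketch, assuming the artifact built by `mvn clean install` is named `spark-tfrecord_2.12-0.3.0.jar` and `my_job.py` is a placeholder for your application:

```shell
# Pass the built jar explicitly when launching Spark;
# note the .jar extension on the path.
spark-submit \
  --jars "C:\spark\spark-3.1.1-bin-hadoop2.7\jars\spark-tfrecord_2.12-0.3.0.jar" \
  my_job.py
```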