
Set log level #3

Open
lucaro opened this issue Nov 1, 2019 · 29 comments

Comments

@lucaro

lucaro commented Nov 1, 2019

System information

  • TensorFlow version (use command below): 1.15

Describe the current behavior

When using the current Java bindings for TensorFlow, the log gets filled with output from every operation when a model is evaluated. I was unable to find a way to set the log level via the Java API. Is this at all possible? If not, could you please consider adding this functionality in future versions?

@karllessard
Collaborator

@lucaro, can you please provide an example of such a log? I don't see any in my case, and I'm wondering whether that is because your graph is taking a different path than mine.

@lucaro
Author

lucaro commented Nov 4, 2019

Sure, the following is just a small snippet of the produced output. The lines that start with date and time information are written to the standard error stream, while the others are written to standard out. The graph used while generating this output is this one: https://github.com/lucaro/VideoSaliencyFilter/blob/master/omcnn2clstm.pb

2019-11-04 10:07:54.017198: I tensorflow/core/common_runtime/placer.cc:54] inference/Lastconv_2_3/mul: (Mul): /job:localhost/replica:0/task:0/device:GPU:0
2019-11-04 10:07:54.017342: I tensorflow/core/common_runtime/placer.cc:54] inference/Lastconv_2_3/mul_1: (Mul): /job:localhost/replica:0/task:0/device:GPU:0
2019-11-04 10:07:54.017445: I tensorflow/core/common_runtime/placer.cc:54] inference/Lastconv_2_3/sub: (Sub): /job:localhost/replica:0/task:0/device:GPU:0
2019-11-04 10:07:54.017603: I tensorflow/core/common_runtime/placer.cc:54] inference/Lastconv_2_3/mul_2: (Mul): /job:localhost/replica:0/task:0/device:GPU:0
inference/FNDownconv_4_1_3/Pad: (Pad): /job:localhost/replica:0/task:0/device:GPU:0
inference/FNDownconv_4_1_3/Conv2D: (Conv2D): /job:localhost/replica:0/task:0/device:GPU:0
inference/FNDownconv_4_1_3/linearout: (BiasAdd): /job:localhost/replica:0/task:0/device:GPU:0
inference/FNDownconv_4_1_3/Greater: (Greater): /job:localhost/replica:0/task:0/device:GPU:0
inference/FNDownconv_4_1_3/Cast: (Cast): /job:localhost/replica:0/task:0/device:GPU:0
inference/FNDownconv_4_1_3/mul: (Mul): /job:localhost/replica:0/task:0/device:GPU:0
inference/FNDownconv_4_1_3/mul_1: (Mul): /job:localhost/replica:0/task:0/device:GPU:0
2019-11-04 10:07:54.017765: I tensorflow/core/common_runtime/placer.cc:54] inference/Lastconv_2_3/mul_3: (Mul): /job:localhost/replica:0/task:0/device:GPU:0
inference/FNDownconv_4_1_3/sub: (Sub): /job:localhost/replica:0/task:0/device:GPU:0
inference/FNDownconv_4_1_3/mul_2: (Mul): /job:localhost/replica:0/task:0/device:GPU:0
inference/FNDownconv_4_1_3/mul_3: (Mul): /job:localhost/replica:0/task:0/device:GPU:0
inference/FNDownconv_4_1_3/add: (Add): /job:localhost/replica:0/task:0/device:GPU:0
inference/FNDownconv_3_1_3/Pad: (Pad): /job:localhost/replica:0/task:0/device:GPU:0
inference/FNDownconv_3_1_3/Conv2D: (Conv2D): /job:localhost/replica:0/task:0/device:GPU:0
inference/FNDownconv_3_1_3/linearout: (BiasAdd): /job:localhost/replica:0/task:0/device:GPU:0
inference/FNDownconv_3_1_3/Greater: (Greater): /job:localhost/replica:0/task:0/device:GPU:0
inference/FNDownconv_3_1_3/Cast: (Cast): /job:localhost/replica:0/task:0/device:GPU:0
inference/FNDownconv_3_1_3/mul: (Mul): /job:localhost/replica:0/task:0/device:GPU:0
inference/FNDownconv_3_1_3/mul_1: (Mul): /job:localhost/replica:0/task:0/device:GPU:0
inference/FNDownconv_3_1_3/sub: (Sub): /job:localhost/replica:0/task:0/device:GPU:0
inference/FNDownconv_3_1_3/mul_2: (Mul): /job:localhost/replica:0/task:0/device:GPU:0
2019-11-04 10:07:54.017904: I tensorflow/core/common_runtime/placer.cc:54] inference/Lastconv_2_3/add: (Add): /job:localhost/replica:0/task:0/device:GPU:0
inference/FNDownconv_3_1_3/mul_3: (Mul): /job:localhost/replica:0/task:0/device:GPU:0
inference/FNDownconv_3_1_3/add: (Add): /job:localhost/replica:0/task:0/device:GPU:0
inference/resize_52/ResizeBilinear: (ResizeBilinear): /job:localhost/replica:0/task:0/device:GPU:0
inference/resize_53/ResizeBilinear: (ResizeBilinear): /job:localhost/replica:0/task:0/device:GPU:0
inference/resize_55/ResizeBilinear: (ResizeBilinear): /job:localhost/replica:0/task:0/device:GPU:0
2019-11-04 10:07:54.018023: I tensorflow/core/common_runtime/placer.cc:54] inference/Lastconv_3_3/Pad: (Pad): /job:localhost/replica:0/task:0/device:GPU:0
inference/FNconcat_out_3: (ConcatV2): /job:localhost/replica:0/task:0/device:GPU:0
inference/concat_15: (ConcatV2): /job:localhost/replica:0/task:0/device:GPU:0
inference/Lastconv_1_3/Pad: (Pad): /job:localhost/replica:0/task:0/device:GPU:0
inference/Lastconv_1_3/Conv2D: (Conv2D): /job:localhost/replica:0/task:0/device:GPU:0
inference/Lastconv_1_3/linearout: (BiasAdd): /job:localhost/replica:0/task:0/device:GPU:0
inference/Lastconv_1_3/Greater: (Greater): /job:localhost/replica:0/task:0/device:GPU:0
inference/Lastconv_1_3/Cast: (Cast): /job:localhost/replica:0/task:0/device:GPU:0
inference/Lastconv_1_3/mul: (Mul): /job:localhost/replica:0/task:0/device:GPU:0
inference/Lastconv_1_3/mul_1: (Mul): /job:localhost/replica:0/task:0/device:GPU:0
inference/Lastconv_1_3/sub: (Sub): /job:localhost/replica:0/task:0/device:GPU:0
inference/Lastconv_1_3/mul_2: (Mul): /job:localhost/replica:0/task:0/device:GPU:0
inference/Lastconv_1_3/mul_3: (Mul): /job:localhost/replica:0/task:0/device:GPU:0
inference/Lastconv_1_3/add: (Add): /job:localhost/replica:0/task:0/device:GPU:0
inference/Lastconv_2_3/Pad: (Pad): /job:localhost/replica:0/task:0/device:GPU:0
2019-11-04 10:07:54.018197: I tensorflow/core/common_runtime/placer.cc:54] inference/Lastconv_3_3/Conv2D: (Conv2D): /job:localhost/replica:0/task:0/device:GPU:0
inference/Lastconv_2_3/Conv2D: (Conv2D): /job:localhost/replica:0/task:0/device:GPU:0
inference/Lastconv_2_3/linearout: (BiasAdd): /job:localhost/replica:0/task:0/device:GPU:0
inference/Lastconv_2_3/Greater: (Greater): /job:localhost/replica:0/task:0/device:GPU:0
inference/Lastconv_2_3/Cast: (Cast): /job:localhost/replica:0/task:0/device:GPU:0
inference/Lastconv_2_3/mul: (Mul): /job:localhost/replica:0/task:0/device:GPU:0
inference/Lastconv_2_3/mul_1: (Mul): /job:localhost/replica:0/task:0/device:GPU:0
inference/Lastconv_2_3/sub: (Sub): /job:localhost/replica:0/task:0/device:GPU:0
inference/Lastconv_2_3/mul_2: (Mul): /job:localhost/replica:0/task:0/device:GPU:0
2019-11-04 10:07:54.018312: I tensorflow/core/common_runtime/placer.cc:54] inference/Lastconv_3_3/linearout: (BiasAdd): /job:localhost/replica:0/task:0/device:GPU:0
2019-11-04 10:07:54.018461: I tensorflow/core/common_runtime/placer.cc:54] inference/Lastconv_3_3/Greater: (Greater): /job:localhost/replica:0/task:0/device:GPU:0
2019-11-04 10:07:54.018626: I tensorflow/core/common_runtime/placer.cc:54] inference/Lastconv_3_3/Cast: (Cast): /job:localhost/replica:0/task:0/device:GPU:0
2019-11-04 10:07:54.018771: I tensorflow/core/common_runtime/placer.cc:54] inference/Lastconv_3_3/mul: (Mul): /job:localhost/replica:0/task:0/device:GPU:0
2019-11-04 10:07:54.018921: I tensorflow/core/common_runtime/placer.cc:54] inference/Lastconv_3_3/mul_1: (Mul): /job:localhost/replica:0/task:0/device:GPU:0
2019-11-04 10:07:54.019070: I tensorflow/core/common_runtime/placer.cc:54] inference/Lastconv_3_3/sub: (Sub): /job:localhost/replica:0/task:0/device:GPU:0
2019-11-04 10:07:54.019187: I tensorflow/core/common_runtime/placer.cc:54] inference/Lastconv_3_3/mul_2: (Mul): /job:localhost/replica:0/task:0/device:GPU:0
2019-11-04 10:07:54.019335: I tensorflow/core/common_runtime/placer.cc:54] inference/Lastconv_3_3/mul_3: (Mul): /job:localhost/replica:0/task:0/device:GPU:0
2019-11-04 10:07:54.019491: I tensorflow/core/common_runtime/placer.cc:54] inference/Lastconv_3_3/add: (Add): /job:localhost/replica:0/task:0/device:GPU:0
2019-11-04 10:07:54.019642: I tensorflow/core/common_runtime/placer.cc:54] inference/Lastconv_4_3/Pad: (Pad): /job:localhost/replica:0/task:0/device:GPU:0
2019-11-04 10:07:54.019811: I tensorflow/core/common_runtime/placer.cc:54] inference/Lastconv_4_3/Conv2D: (Conv2D): /job:localhost/replica:0/task:0/device:GPU:0
2019-11-04 10:07:54.019995: I tensorflow/core/common_runtime/placer.cc:54] inference/Lastconv_4_3/linearout: (BiasAdd): /job:localhost/replica:0/task:0/device:GPU:0
2019-11-04 10:07:54.020164: I tensorflow/core/common_runtime/placer.cc:54] inference/Lastconv_4_3/Greater: (Greater): /job:localhost/replica:0/task:0/device:GPU:0
2019-11-04 10:07:54.020311: I tensorflow/core/common_runtime/placer.cc:54] inference/Lastconv_4_3/Cast: (Cast): /job:localhost/replica:0/task:0/device:GPU:0
2019-11-04 10:07:54.020470: I tensorflow/core/common_runtime/placer.cc:54] inference/Lastconv_4_3/mul: (Mul): /job:localhost/replica:0/task:0/device:GPU:0
2019-11-04 10:07:54.020621: I tensorflow/core/common_runtime/placer.cc:54] inference/Lastconv_4_3/mul_1: (Mul): /job:localhost/replica:0/task:0/device:GPU:0
2019-11-04 10:07:54.020758: I tensorflow/core/common_runtime/placer.cc:54] inference/Lastconv_4_3/sub: (Sub): /job:localhost/replica:0/task:0/device:GPU:0
2019-11-04 10:07:54.020909: I tensorflow/core/common_runtime/placer.cc:54] inference/Lastconv_4_3/mul_2: (Mul): /job:localhost/replica:0/task:0/device:GPU:0
2019-11-04 10:07:54.021049: I tensorflow/core/common_runtime/placer.cc:54] inference/Lastconv_4_3/mul_3: (Mul): /job:localhost/replica:0/task:0/device:GPU:0
2019-11-04 10:07:54.021215: I tensorflow/core/common_runtime/placer.cc:54] inference/Lastconv_4_3/add: (Add): /job:localhost/replica:0/task:0/device:GPU:0
2019-11-04 10:07:54.021354: I tensorflow/core/common_runtime/placer.cc:54] inference/strided_slice_68: (StridedSlice): /job:localhost/replica:0/task:0/device:GPU:0
2019-11-04 10:07:54.021506: I tensorflow/core/common_runtime/placer.cc:54] inference/strided_slice_69: (StridedSlice): /job:localhost/replica:0/task:0/device:GPU:0
2019-11-04 10:07:54.021681: I tensorflow/core/common_runtime/placer.cc:54] inference/split_6: (Split): /job:localhost/replica:0/task:0/device:GPU:0
2019-11-04 10:07:54.021780: I tensorflow/core/common_runtime/placer.cc:54] inference/strided_slice_70: (StridedSlice): /job:localhost/replica:0/task:0/device:GPU:0
2019-11-04 10:07:54.021920: I tensorflow/core/common_runtime/placer.cc:54] inference/mul_142: (Mul): /job:localhost/replica:0/task:0/device:GPU:0
2019-11-04 10:07:54.022099: I tensorflow/core/common_runtime/placer.cc:54] inference/lstm_layer1h1_3/Conv2D: (Conv2D): /job:localhost/replica:0/task:0/device:GPU:0
2019-11-04 10:07:54.022266: I tensorflow/core/common_runtime/placer.cc:54] inference/mul_143: (Mul): /job:localhost/replica:0/task:0/device:GPU:0
2019-11-04 10:07:54.022418: I tensorflow/core/common_runtime/placer.cc:54] inference/strided_slice_71: (StridedSlice): /job:localhost/replica:0/task:0/device:GPU:0
2019-11-04 10:07:54.022560: I tensorflow/core/common_runtime/placer.cc:54] inference/mul_144: (Mul): /job:localhost/replica:0/task:0/device:GPU:0
2019-11-04 10:07:54.022727: I tensorflow/core/common_runtime/placer.cc:54] inference/lstm_layer1h2_3/Conv2D: (Conv2D): /job:localhost/replica:0/task:0/device:GPU:0
2019-11-04 10:07:54.022929: I tensorflow/core/common_runtime/placer.cc:54] inference/mul_145: (Mul): /job:localhost/replica:0/task:0/device:GPU:0
2019-11-04 10:07:54.023080: I tensorflow/core/common_runtime/placer.cc:54] inference/strided_slice_72: (StridedSlice): /job:localhost/replica:0/task:0/device:GPU:0
2019-11-04 10:07:54.023236: I tensorflow/core/common_runtime/placer.cc:54] inference/mul_146: (Mul): /job:localhost/replica:0/task:0/device:GPU:0
2019-11-04 10:07:54.023368: I tensorflow/core/common_runtime/placer.cc:54] inference/lstm_layer1h3_3/Conv2D: (Conv2D): /job:localhost/replica:0/task:0/device:GPU:0
2019-11-04 10:07:54.023516: I tensorflow/core/common_runtime/placer.cc:54] inference/mul_147: (Mul): /job:localhost/replica:0/task:0/device:GPU:0
2019-11-04 10:07:54.023680: I tensorflow/core/common_runtime/placer.cc:54] inference/strided_slice_73: (StridedSlice): /job:localhost/replica:0/task:0/device:GPU:0
2019-11-04 10:07:54.023832: I tensorflow/core/common_runtime/placer.cc:54] inference/mul_148: (Mul): /job:localhost/replica:0/task:0/device:GPU:0
2019-11-04 10:07:54.023949: I tensorflow/core/common_runtime/placer.cc:54] inference/lstm_layer1h4_3/Conv2D: (Conv2D): /job:localhost/replica:0/task:0/device:GPU:0
2019-11-04 10:07:54.024321: I tensorflow/core/common_runtime/placer.cc:54] inference/mul_149: (Mul): /job:localhost/replica:0/task:0/device:GPU:0
2019-11-04 10:07:54.024498: I tensorflow/core/common_runtime/placer.cc:54] inference/strided_slice_74: (StridedSlice): /job:localhost/replica:0/task:0/device:GPU:0
2019-11-04 10:07:54.024629: I tensorflow/core/common_runtime/placer.cc:54] inference/mul_150: (Mul): /job:localhost/replica:0/task:0/device:GPU:0
2019-11-04 10:07:54.024802: I tensorflow/core/common_runtime/placer.cc:54] inference/lstm_layer1i1_3/Conv2D: (Conv2D): /job:localhost/replica:0/task:0/device:GPU:0
2019-11-04 10:07:54.024983: I tensorflow/core/common_runtime/placer.cc:54] inference/add_74: (Add): /job:localhost/replica:0/task:0/device:GPU:0
2019-11-04 10:07:54.025124: I tensorflow/core/common_runtime/placer.cc:54] inference/mul_151: (Mul): /job:localhost/replica:0/task:0/device:GPU:0
2019-11-04 10:07:54.025239: I tensorflow/core/common_runtime/placer.cc:54] inference/strided_slice_75: (StridedSlice): /job:localhost/replica:0/task:0/device:GPU:0
2019-11-04 10:07:54.025329: I tensorflow/core/common_runtime/placer.cc:54] inference/mul_152: (Mul): /job:localhost/replica:0/task:0/device:GPU:0
2019-11-04 10:07:54.025544: I tensorflow/core/common_runtime/placer.cc:54] inference/lstm_layer1i2_3/Conv2D: (Conv2D): /job:localhost/replica:0/task:0/device:GPU:0
inference/Lastconv_2_3/mul_3: (Mul): /job:localhost/replica:0/task:0/device:GPU:0
inference/Lastconv_2_3/add: (Add): /job:localhost/replica:0/task:0/device:GPU:0
inference/Lastconv_3_3/Pad: (Pad): /job:localhost/replica:0/task:0/device:GPU:0
inference/Lastconv_3_3/Conv2D: (Conv2D): /job:localhost/replica:0/task:0/device:GPU:0
inference/Lastconv_3_3/linearout: (BiasAdd): /job:localhost/replica:0/task:0/device:GPU:0
inference/Lastconv_3_3/Greater: (Greater): /job:localhost/replica:0/task:0/device:GPU:0
inference/Lastconv_3_3/Cast: (Cast): /job:localhost/replica:0/task:0/device:GPU:0
inference/Lastconv_3_3/mul: (Mul): /job:localhost/replica:0/task:0/device:GPU:0
inference/Lastconv_3_3/mul_1: (Mul): /job:localhost/replica:0/task:0/device:GPU:0
inference/Lastconv_3_3/sub: (Sub): /job:localhost/replica:0/task:0/device:GPU:0
inference/Lastconv_3_3/mul_2: (Mul): /job:localhost/replica:0/task:0/device:GPU:0
2019-11-04 10:07:54.025742: I tensorflow/core/common_runtime/placer.cc:54] inference/add_75: (Add): /job:localhost/replica:0/task:0/device:GPU:0
inference/Lastconv_3_3/mul_3: (Mul): /job:localhost/replica:0/task:0/device:GPU:0
inference/Lastconv_3_3/add: (Add): /job:localhost/replica:0/task:0/device:GPU:0
inference/Lastconv_4_3/Pad: (Pad): /job:localhost/replica:0/task:0/device:GPU:0
inference/Lastconv_4_3/Conv2D: (Conv2D): /job:localhost/replica:0/task:0/device:GPU:0
inference/Lastconv_4_3/linearout: (BiasAdd): /job:localhost/replica:0/task:0/device:GPU:0
inference/Lastconv_4_3/Greater: (Greater): /job:localhost/replica:0/task:0/device:GPU:0
inference/Lastconv_4_3/Cast: (Cast): /job:localhost/replica:0/task:0/device:GPU:0
inference/Lastconv_4_3/mul: (Mul): /job:localhost/replica:0/task:0/device:GPU:0
inference/Lastconv_4_3/mul_1: (Mul): /job:localhost/replica:0/task:0/device:GPU:0
inference/Lastconv_4_3/sub: (Sub): /job:localhost/replica:0/task:0/device:GPU:0
inference/Lastconv_4_3/mul_2: (Mul): /job:localhost/replica:0/task:0/device:GPU:0
inference/Lastconv_4_3/mul_3: (Mul): /job:localhost/replica:0/task:0/device:GPU:0
inference/Lastconv_4_3/add: (Add): /job:localhost/replica:0/task:0/device:GPU:0
inference/strided_slice_68: (StridedSlice): /job:localhost/replica:0/task:0/device:GPU:0

@karllessard
Collaborator

OK, it looks to me like you are logging device placement. This is normally controlled in Java by setting the log_device_placement field of the config proto to true and passing this proto to the Session constructor.

I would have expected the default value to be false; are you passing such a proto when constructing your sessions?
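For reference, a minimal sketch of what that looks like with the TF 1.x Java API; it assumes the org.tensorflow:proto artifact is on the classpath for the generated ConfigProto class, and the helper class name is illustrative only:

```java
import org.tensorflow.Graph;
import org.tensorflow.Session;
import org.tensorflow.framework.ConfigProto; // generated proto, from org.tensorflow:proto

public class QuietSession {
    // Build a session whose config explicitly leaves device-placement logging off.
    static Session create(Graph graph) {
        ConfigProto config = ConfigProto.newBuilder()
                .setLogDevicePlacement(false) // true produces the placer.cc lines quoted above
                .build();
        return new Session(graph, config.toByteArray());
    }
}
```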

@lucaro
Author

lucaro commented Nov 5, 2019

Oops, the logDevicePlacement field was indeed set to true; I must have accidentally autocompleted the wrong statement. After removing that statement, most, but not all, of the log output disappears. The remainder looks like this:

2019-11-05 09:24:16.516764: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_100.dll
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.google.protobuf.UnsafeUtil (file:/C:/Users/Lucaro/.gradle/caches/modules-2/files-2.1/com.google.protobuf/protobuf-java/3.5.1/8c3492f7662fa1cbf8ca76a0f5eb1146f7725acd/protobuf-java-3.5.1.jar) to field java.nio.Buffer.address
WARNING: Please consider reporting this to the maintainers of com.google.protobuf.UnsafeUtil
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
2019-11-05 09:24:17.684118: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
2019-11-05 09:24:17.698957: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library nvcuda.dll
2019-11-05 09:24:17.919471: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties:
name: Quadro T2000 major: 7 minor: 5 memoryClockRate(GHz): 1.785
pciBusID: 0000:01:00.0
2019-11-05 09:24:17.919615: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_100.dll
2019-11-05 09:24:17.961986: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_100.dll
2019-11-05 09:24:18.018428: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cufft64_100.dll
2019-11-05 09:24:18.040931: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library curand64_100.dll
2019-11-05 09:24:18.149269: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusolver64_100.dll
2019-11-05 09:24:18.212825: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusparse64_100.dll
2019-11-05 09:24:18.387853: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll
2019-11-05 09:24:18.388332: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0
2019-11-05 09:24:19.179044: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1159] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-11-05 09:24:19.179143: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1165] 0
2019-11-05 09:24:19.179192: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1178] 0: N
2019-11-05 09:24:19.179951: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 2048 MB memory) -> physical GPU (device: 0, name: Quadro T2000, pci bus id: 0000:01:00.0, compute capability: 7.5)
2019-11-05 09:25:30.978231: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_100.dll
2019-11-05 09:25:32.156321: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll
2019-11-05 09:25:36.646235: W tensorflow/stream_executor/cuda/redzone_allocator.cc:312] Internal: Invoking ptxas not supported on Windows
Relying on driver to perform ptx compilation. This message will be only logged once.
2019-11-05 09:25:39.114125: W tensorflow/core/common_runtime/bfc_allocator.cc:239] Allocator (GPU_0_bfc) ran out of memory trying to allocate 563.21MiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2019-11-05 09:25:39.115365: W tensorflow/core/common_runtime/bfc_allocator.cc:239] Allocator (GPU_0_bfc) ran out of memory trying to allocate 563.21MiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.

The warnings look like they probably warrant their own issue at another time, but the rest looks like a regular operation log, whose level should be configurable somehow, right?

@karllessard
Collaborator

That I personally don't know; I've never tried to do it before. The messages you are showing are at the INFO level, not debug traces, meaning the developer expected them to be shown.

I'll investigate a bit more how to turn those off, but just to validate one last thing: are you on Android or just using the standard JDK?

@Craigacp
Collaborator

Craigacp commented Nov 5, 2019

Looks like the protobuf warnings should go away if we use the latest protobuf release (see protocolbuffers/protobuf#3781).

@lucaro
Author

lucaro commented Nov 5, 2019

Thanks! Regarding my environment, I'm currently using OpenJDK 12.

@karllessard
Collaborator

Hi @lucaro, sorry I didn't come back sooner on this. After clearing the device placement logs and upgrading the protobuf version, is your execution still too verbose?

@lucaro
Author

lucaro commented Nov 20, 2019

The log output shown above (#3 (comment)) is after the removal of the device placement log flag. I'm using the most recent TensorFlow version available on Maven Central, which is 1.15. I did not explicitly add protobuf to the project dependencies; the full list of dependencies is here: https://github.com/lucaro/VideoSaliencyFilter/blob/master/build.gradle
Do you think adding a protobuf dependency would help in the matter, and if so, which one would that be?

@karllessard
Collaborator

karllessard commented Dec 2, 2019

Hi @lucaro, I'm really sorry for the late reply; I wanted to come back to this issue as soon as I found some time, which only happened now...

Digging in the TF source code, I found out that there is an environment variable that sets the minimum log level in the C++ core: https://github.com/tensorflow/tensorflow/blob/b82ab06d4b596a4aa964b2c78de3478e0cef6dd9/tensorflow/core/platform/default/logging.cc#L153

Can you set that environment variable TF_CPP_MIN_LOG_LEVEL to something higher than 0 and try again?
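In case it can't be set in a shell or run configuration: the native runtime reads the variable when the library is loaded, so it has to be in the process environment before any org.tensorflow class is touched. Below is a rough sketch of a relauncher that does this; the launcher and main-class names are purely illustrative and not part of any TF API.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Re-runs the real application with TF_CPP_MIN_LOG_LEVEL in its environment.
public class QuietLauncher {
    public static void main(String[] args) throws Exception {
        List<String> cmd = new ArrayList<>();
        cmd.add(System.getProperty("java.home") + "/bin/java");
        cmd.add("-cp");
        cmd.add(System.getProperty("java.class.path"));
        cmd.add("com.example.RealMain"); // hypothetical entry point of the actual app
        Collections.addAll(cmd, args);   // forward the original arguments

        ProcessBuilder pb = new ProcessBuilder(cmd).inheritIO();
        // 0 = all messages, 1 = filter INFO, 2 = also filter WARNING, 3 = also filter ERROR
        pb.environment().put("TF_CPP_MIN_LOG_LEVEL", "1");
        System.exit(pb.start().waitFor());
    }
}
```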

@lucaro
Author

lucaro commented Dec 2, 2019

I'm not entirely sure what causes it, but the amount of log output differs depending on whether I run the application from the IDE or from a jar. With the jar, the only remaining output is the following; the environment variable you mentioned does not appear to make any difference to it.

WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.google.protobuf.UnsafeUtil (file:/C:/Users/Lucaro/Workspace/VideoSaliencyFilter/build/libs/vsf.jar) to field java.nio.Buffer.address
WARNING: Please consider reporting this to the maintainers of com.google.protobuf.UnsafeUtil
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
2019-12-02 13:29:00.885341: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2

@karllessard
Collaborator

OK, so from the JAR this kind of log seems normal to me (the protobuf warning might be avoided by overriding the version of their artifact, like @Craigacp mentioned, but it is not something we need to worry about).

Which IDE do you use? Are you sure the environment variable I mentioned is not set somewhere in the execution settings of your app?
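As a quick sanity check (not from this thread), something like the following printed early in the application shows which value, if any, the JVM process actually inherits for that variable:

```java
public class CheckTfLogLevel {
    public static void main(String[] args) {
        // Print the value the JVM sees before the TensorFlow native library is loaded.
        String level = System.getenv("TF_CPP_MIN_LOG_LEVEL");
        System.out.println("TF_CPP_MIN_LOG_LEVEL = " + (level == null ? "<not set>" : level));
    }
}
```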

@lucaro
Author

lucaro commented Dec 3, 2019

It's not only the protobuf warning; the last line comes from TensorFlow.
I'm using IntelliJ and did not set any other parameters for execution.

@karllessard
Collaborator

It's not only the protobuf warning, the last line comes from TensorFlow.

The line warning you that the TF binary was not compiled to use all of your CPU's instructions has always been there, so I don't see it as an issue.

@lucaro
Author

lucaro commented Dec 4, 2019

Well, at this point it's more of a philosophical question, I guess, but shouldn't the log output of a library, especially output that is informational rather than error-related, be controllable by the main application using the library?

@Craigacp
Collaborator

Craigacp commented Dec 4, 2019

We might be able to call the appropriate method in the guts of the C++ library to turn off the logger, but really the C++ library should be refactored a bit to expose better methods for controlling the logging behaviour.

@karllessard
Collaborator

karllessard commented Dec 4, 2019

Hi @lucaro, I was able to turn the logs off by setting the configuration variable I gave you in my previous post. Here are my IntelliJ settings for running my app:

(screenshot of the IntelliJ run configuration showing the environment variable setting)

Output with value set to 1:

Extracting 60000 images of 28x28 from train-images-idx3-ubyte.gz
Extracting 60000 labels from train-labels-idx1-ubyte.gz
Extracting 10000 images of 28x28 from t10k-images-idx3-ubyte.gz
Extracting 10000 labels from t10k-labels-idx1-ubyte.gz
Accuracy: 0.9071

Output with value unset (or set to 0):

Extracting 60000 images of 28x28 from train-images-idx3-ubyte.gz
Extracting 60000 labels from train-labels-idx1-ubyte.gz
Extracting 10000 images of 28x28 from t10k-images-idx3-ubyte.gz
Extracting 10000 labels from t10k-labels-idx1-ubyte.gz
2019-12-04 10:19:53.744911: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-12-04 10:19:53.764567: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7f93b14169f0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2019-12-04 10:19:53.764602: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
Accuracy: 0.9071

Please confirm that it works on your side as well, and if it does, we'll close this issue. I guess it is still possible to make it more user friendly by setting that environment variable on our side.

@lucaro
Author

lucaro commented Dec 5, 2019

I can confirm that this setting changes the amount of log output.

@karllessard
Collaborator

Thanks @lucaro,

changes the amount of log output

do you still see "undesirable" messages in the logs after setting this? (i.e. anything that is just informative?)

@lucaro
Author

lucaro commented Dec 6, 2019

No, after setting this, only the protobuf warning remains. Since there is no way to set this from within the program, though, this is only a solution for this particular instance and not for log management in general. That would, however, be a feature request for the C++ part of TensorFlow, right?

@karllessard
Collaborator

Yes, I think we need to add an endpoint to set the value of this variable in a friendly manner; we'll take a look at it.

For the protobuf warning, I'll come back to you as soon as I know for sure whether it is a C++ runtime issue, which is my feeling right now.

@Craigacp
Collaborator

Craigacp commented Dec 7, 2019

The IllegalReflectiveAccess warning appears because the protobuf jar is messing with fields it shouldn't be. It's emitted on Java 9 and newer. Updating to a newer protobuf version will probably fix it.

@karllessard
Collaborator

Oh, of course the warning has to come from a Java artifact, since it is about an illegal reflective access... But I can't reproduce it on my side, even if I include an additional dependency on org.tensorflow:proto, and I'm using JDK 11.

@lucaro, are you sure that this warning does not come from another artifact that your application might depend on?

@lucaro
Author

lucaro commented Dec 9, 2019

TensorFlow is the only library I'm using in this particular project that has a dependency on protobuf. In case it helps, this is the complete list of dependencies: https://github.com/lucaro/VideoSaliencyFilter/blob/master/build.gradle

@Craigacp
Collaborator

Craigacp commented Dec 10, 2019

Looks like it's fixed by using protobuf-java 3.7.0 or newer (https://github.com/protocolbuffers/protobuf/releases/tag/v3.7.0). The 1.15.0 proto artifact depends on protobuf-java 3.5.1. @karllessard, @sjamesr, where does the org.tensorflow:proto artifact come from? Is it one of the things we'll produce here, or is it produced by Google?

@sjamesr
Contributor

sjamesr commented Dec 10, 2019

Google has been doing the releases up to this point, I don't know exactly what the plan is for this artifact in the future.

@karllessard
Collaborator

A new issue has been created for protobuf support in the new TF Java: #21

@silvanheller

Bumping this issue: is there any way to set the log level from within the Java code now that the corresponding issue has been resolved?

@Craigacp
Collaborator

Craigacp commented Mar 3, 2022

We spent some time looking into exporting TF logging messages into Java so they can be logged through a Java logging framework, but unfortunately that effort stalled because the TF log handler doesn't behave correctly (see tensorflow/tensorflow#44995 and #345).
