Hello!

Is it normal behavior to access memory locations using `tensor.accessor` beyond those occupied by the tensor itself?

Below is a simple example of a kernel that instantiates an output tensor of shape {5, 5, 5} and then tries to access an element outside the tensor's bounds, at index {15, 20, 30}. I would have expected this to cause an illegal memory access error, but it doesn't: the code runs without any errors, and the logs from a test script invoking the kernel show that I can successfully read and write to memory locations outside of the output tensor.

Is this behavior expected?
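The kernel and test script from the original report were not captured on this page. As a rough illustration only, a kernel exhibiting the pattern described above might look like the sketch below; the function name `oob_access_kernel` is hypothetical, and the exact Neuron custom-op entry-point signature is an assumption, but the `Tensor::accessor` API shown is the PyTorch-style one the reply below refers to.

```cpp
#include <torch/torch.h>
#include <cstdio>

// Hypothetical kernel: allocates a {5, 5, 5} output tensor, then reads and
// writes through the accessor at index {15, 20, 30}, well outside the
// tensor's 5*5*5 elements. The accessor performs no bounds checking, so
// both operations silently touch memory beyond the tensor's storage.
torch::Tensor oob_access_kernel() {
    torch::Tensor t_out = torch::zeros({5, 5, 5}, torch::kFloat);
    auto acc = t_out.accessor<float, 3>();

    acc[15][20][30] = 42.0f;            // out-of-bounds write, no error raised
    float v = acc[15][20][30];          // out-of-bounds read of the same slot
    std::printf("read back: %f\n", v);  // prints 42.0, as the report observes

    return t_out;
}
```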
Hi @zafstojano,
Thanks for reporting this issue! After our investigation, we confirm that this is expected behavior: Neuron does not perform bounds checking on the TensorAccessor, in order to maximize performance. The responsibility for ensuring that indices are within the valid range falls on the programmer. Note that this behavior matches PyTorch's TensorAccessor. We will update our public documentation to state this clearly. We will close this issue, but feel free to reopen it if you have any further questions.
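Since the accessor itself will not catch a bad index, a caller-side guard is the usual mitigation. A minimal sketch, assuming a CPU float tensor of rank 3; the helper name `checked_at` is ours, not part of any Neuron or PyTorch API:

```cpp
#include <torch/torch.h>
#include <stdexcept>

// Hypothetical helper: validate indices against the tensor's sizes before
// going through the unchecked accessor.
float checked_at(const torch::Tensor& t, int64_t i, int64_t j, int64_t k) {
    if (i < 0 || i >= t.size(0) ||
        j < 0 || j >= t.size(1) ||
        k < 0 || k >= t.size(2)) {
        throw std::out_of_range("index outside tensor bounds");
    }
    return t.accessor<float, 3>()[i][j][k];
}

// Usage: checked_at(t_out, 15, 20, 30) throws instead of silently reading
// past the end of t_out's storage.
```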