Sharing memory between numpy and tensorflow doesn't work #33254
@oanush Sorry, I wrote the code incorrectly. The following code should work:

```python
import tensorflow as tf
import numpy as np

print(tf.__version__)   # 2.0.0
tf.executing_eagerly()  # True
a = tf.constant([3, 4]).cpu()
b = a.numpy()
b[0] = 1
print(a)  # [3, 4], expected to be [1, 4] if it shared memory with the numpy array
print(b)  # [1, 4]
```
As described in https://www.tensorflow.org/tutorials/customization/basics#numpy_compatibility, they should share the underlying memory.
Issue is replicating with TF 2.0; kindly find the Colab gist. Thanks!
@VoVAllen,
It is evident from the implementation of Tensor.numpy() as well that sharing the underlying memory is not mandatory.
Considering all these observations, as per my understanding, this behavior is expected. Please let me know your thoughts. Thanks!
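For illustration, here is a hypothetical sketch (not TensorFlow's actual implementation) of why a conversion routine might copy defensively: when the underlying buffer is immutable, returning a copy gives the caller a writable array without aliasing the original. An immutable `bytes` object stands in for a tensor's buffer:

```python
import numpy as np

def defensive_numpy(buffer_view):
    # Hypothetical helper: wrap a buffer, copying when it is read-only
    # (as a tensor's buffer is) so the caller gets a writable array.
    arr = np.asarray(buffer_view)
    return arr.copy() if not arr.flags.writeable else arr

src = bytes([3, 4])                  # immutable stand-in for a tensor buffer
out = defensive_numpy(memoryview(src))
out[0] = 1                           # mutates only the copy
print(list(src), out.tolist())       # [3, 4] [1, 4]
```

The cost of this safety is exactly what the issue observes: writes to the returned array never reach the original buffer.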
Hi, thanks for your reply. Actually I think I found the bug. Look at tensorflow/tensorflow/python/framework/ops.py Line 939 in afa2418:

If it is already a numpy.ndarray, why not return it directly? With the following code I got what I expected:

```python
import tensorflow as tf
import numpy as np

print(tf.__version__)   # 2.0.0
tf.executing_eagerly()  # True
a = tf.constant([3, 4]).cpu()
b = a._numpy()  # changed from numpy() to _numpy()
b[0] = 1
print(a)  # [1, 4]
print(b)  # [1, 4]
```

At the very least there is some inconsistency between the doc and the behavior.
I checked some history commits (7caec68); the copy behavior seems intentional. At https://github.com/tensorflow/tensorflow/blame/5278b8509e2cd1b2847315db46fc0f958824cfce/tensorflow/python/framework/ops.py#L709, numpy is zero-copied from tf. The next commit (ca1b54a?diff=split) changed that.
@superbobry Sorry to bother you. Could you comment on this issue: under what conditions can a tf tensor share memory with numpy? Also, could you say a bit more about the reference counting problem mentioned above? Many thanks!
NumPy arrays are mutable whereas tensors aren't. Two caveats:

Tensor supports the buffer interface, so you could get a readonly NumPy array via

```python
>>> t = tf.constant([42])
>>> a = np.asarray(memoryview(t))
>>> a
array([42], dtype=int32)
```

Passing a CPU tensor directly to any NumPy API is zero-copy, because NumPy uses the buffer interface behind the scenes (but NumPy will allocate a new array for the result):

```python
>>> np.square(t)
array([1764], dtype=int32)
```
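The read-only behavior described above can be reproduced without TensorFlow: `np.asarray` over a `memoryview` of an immutable buffer (here `bytes`, standing in for a tensor's storage) yields a zero-copy, non-writable array, and writes through it are rejected:

```python
import numpy as np

buf = bytes([1, 2, 3])            # immutable buffer, like a tensor's storage
a = np.asarray(memoryview(buf))   # zero-copy, read-only view
print(a.flags.writeable)          # False
try:
    a[0] = 9                      # writing through the view is rejected
except ValueError as e:
    print("read-only:", e)
```

This is why the array obtained through the buffer interface is safe to hand out without copying: immutability of the source is preserved.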
Thanks a lot for the explanation. This makes much more sense to me. |
Please make sure that this is a bug. As per our GitHub Policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub.
System information
Describe the current behavior
As described in the introduction, tf will try to share memory between tf and numpy when possible. However, I couldn't figure out how to make this happen.
Describe the expected behavior
Code to reproduce the issue
Provide a reproducible test case that is the bare minimum necessary to generate the problem.
Other info / logs
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.