Tensor shape error using TF 2.8.0 with XLA enabled #54973
Comments
Hi @gadagashwini! Could you take a look at this issue? It is not replicating in the Colab 2.8 version, though.

@mohantym Uh, right, we can run scripts in Colab... Try this tf_280_xla_test.ipynb. Just setting

Just verified, it still exists in
Please make sure that this is a bug. As per our GitHub Policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub.
System information
- OS: Ubuntu 20.04
- TensorFlow version: 2.8.0 / 2.6.3 / tf-nightly 2.9.0-dev20220303
- Python version: 3.8.10
- CUDA/cuDNN: cuda-driver-dev-11-2 11.2.152-1 / libcudnn8 8.3.2.44-1+cuda11.5
- GPU: RTX 2080 Ti (11016 MiB) / RTX 8000 (46080 MiB)
Describe the current behavior
When I upgrade my TF from 2.6.3 to 2.8.0, my daily training script fails with:

`Must have updates.shape = indices.shape[:batch_dim] + buffer_shape[num_index_dims:], got updates.shape: [32], indices.shape: [320,2], buffer_shape: [32,10], num_index_dims: 2, and batch_dim: 1`

This happens with the `TF_XLA_FLAGS="--tf_xla_auto_jit=2"` flag set, which used to work well in TF 2.6.3.
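For reference, the XLA auto-clustering flag used here is set through the environment before launching the training script; a minimal sketch:

```shell
# Enable XLA auto-clustering (the setting used in this report).
export TF_XLA_FLAGS="--tf_xla_auto_jit=2"
echo "$TF_XLA_FLAGS"
```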
Describe the expected behavior
The script should keep working as it did in TF 2.6.3.
Contributing
Standalone code to reproduce the issue
This is my standalone code to reproduce the issue, simplified as much as possible:

Run the test using TF 2.6.3:

Run the test using TF 2.8.0 / tf-nightly 2.9.0-dev20220303:

Some parts of this script may not be needed for the reproduction, I am not sure. I think something goes wrong around `margin = 0.04 * (feature_norm - 10) + 10.0`, but I cannot tell what exactly happens there... Please take a look.
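To make the mismatch concrete, the shape rule quoted in the error message can be checked directly with the reported values. This is a plain-Python restatement of the constraint from the error text (the helper function is mine, not a TensorFlow API), showing that the scatter expects updates of shape `[320]` but receives `[32]`:

```python
# Shape rule quoted in the XLA scatter error:
#   updates.shape = indices.shape[:batch_dim] + buffer_shape[num_index_dims:]
# Checked here with the values from this issue (no TensorFlow required).

def expected_updates_shape(indices_shape, buffer_shape, num_index_dims, batch_dim):
    """Return the updates shape the scatter op requires."""
    return indices_shape[:batch_dim] + buffer_shape[num_index_dims:]

# Values reported by the error message.
indices_shape = [320, 2]
buffer_shape = [32, 10]
num_index_dims = 2
batch_dim = 1
actual_updates_shape = [32]

expected = expected_updates_shape(indices_shape, buffer_shape,
                                  num_index_dims, batch_dim)
print("expected updates.shape:", expected)              # [320]
print("actual   updates.shape:", actual_updates_shape)  # [32]
print("mismatch:", expected != actual_updates_shape)    # True
```

The factor-of-10 difference (320 expected vs. 32 supplied, with a buffer of shape `[32, 10]`) suggests the XLA lowering flattens the indices but not the updates, which would be consistent with a regression between 2.6.3 and 2.8.0 rather than a user error.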