
Aborted (core dumped) in tf.raw_ops.ResourceSparseApplyAdagrad/tf.raw_ops.ResourceSparseApplyAdagradDA/tf.raw_ops.ResourceSparseApplyAdagradV2 #69284

Open
x0w3n opened this issue Jun 6, 2024 · 1 comment
Labels: comp:ops (OPs related issues) · stat:awaiting tensorflower (Status - Awaiting response from tensorflower) · TF 2.16 · type:bug (Bug)

Comments

x0w3n commented Jun 6, 2024

Issue type

Bug

Have you reproduced the bug with TensorFlow Nightly?

Yes

Source

source

TensorFlow version

tf 2.16

Custom code

Yes

OS platform and distribution

Linux Ubuntu 22.04.3 LTS (x86_64)

Mobile device

No response

Python version

3.9.13

Bazel version

No response

GCC/compiler version

No response

CUDA/cuDNN version

No response

GPU model and memory

No response

Current behavior?

On specific inputs, tf.raw_ops.ResourceSparseApplyAdagrad, tf.raw_ops.ResourceSparseApplyAdagradDA, and tf.raw_ops.ResourceSparseApplyAdagradV2 abort the process with "Aborted (core dumped)" instead of raising a Python exception. We ran the code on Colab's latest TensorFlow Nightly, which also triggers the crash. Judging from the log below ("float expected, got complex64"), the failed CHECK appears to be triggered by the accum variable's complex64 dtype not matching the float32 var.

Standalone code to reproduce the issue

import tensorflow as tf

var = tf.Variable([1.0, 2.0, 3.0])                         # float32
accum = tf.Variable([0.1, 0.2, 0.3], dtype=tf.complex64)   # dtype differs from var
accum2 = tf.Variable([0, 1.2, -3.3], dtype=tf.complex64)
lr = 0.01
grad = tf.constant([0.1, 0.2, 0.3])
momentum = 0.9  # unused below

# crash
tf.raw_ops.ResourceSparseApplyAdagrad(
    var=var.handle,
    accum=accum.handle,
    lr=lr,
    grad=grad,
    indices=tf.constant([1, 2, 3]),  # note: index 3 is also out of range for a 3-row var
    use_locking=False,
)


# crash only when GPU is available
# tf.raw_ops.ResourceSparseApplyAdagradDA(
#     var=var.handle,
#     gradient_accumulator=accum.handle,
#     gradient_squared_accumulator=accum2.handle,
#     grad=grad,
#     indices=tf.constant([1, 2, 3]),
#     lr=lr,
#     l1=lr,
#     l2=lr,
#     global_step=1000,
#     use_locking=False,
# )

# crash
# tf.raw_ops.ResourceSparseApplyAdagradV2(
#     var=var.handle,
#     accum=accum.handle,
#     epsilon=0.9,
#     grad=grad,
#     indices=tf.constant([1, 2, 3]),
#     lr=lr,
#     use_locking=False,
#     update_slots=False,
# )
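
For contrast, here is a minimal control case (an editor's sketch, not part of the original report): calling the same op with a float32 accumulator and in-range indices completes normally in eager mode, which is consistent with the complex64 accum being the trigger.

import tensorflow as tf

# Control case: matching dtypes and valid indices; runs without aborting.
var_ok = tf.Variable([1.0, 2.0, 3.0])
accum_ok = tf.Variable([0.1, 0.2, 0.3])  # float32, same dtype as var_ok

tf.raw_ops.ResourceSparseApplyAdagrad(
    var=var_ok.handle,
    accum=accum_ok.handle,
    lr=0.01,
    grad=tf.constant([0.1, 0.2, 0.3]),
    indices=tf.constant([0, 1, 2]),  # all indices valid for a 3-row variable
    use_locking=False,
)
print(var_ok.numpy())  # var updated in place; no crash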

Relevant log output

2024-06-06 01:14:57.455749: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2024-06-06 01:14:57.489757: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-06-06 01:14:58.552846: F tensorflow/core/framework/tensor.cc:844] Check failed: dtype() == expected_dtype (8 vs. 1) float expected, got complex64
Aborted (core dumped)
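
Until the kernels validate these inputs themselves, a caller-side guard can surface the problem as a catchable Python error instead of a process abort. The helper below is a hypothetical sketch (checked_sparse_apply_adagrad is not a TensorFlow API); it assumes eager mode and statically known indices.

import tensorflow as tf

def checked_sparse_apply_adagrad(var, accum, lr, grad, indices, **kwargs):
    # Hypothetical guard: reject the inputs that currently abort the process.
    if accum.dtype != var.dtype:
        raise ValueError(
            f"accum dtype {accum.dtype} must match var dtype {var.dtype}")
    idx = tf.get_static_value(indices)
    if idx is not None and (idx.min() < 0 or idx.max() >= var.shape[0]):
        raise ValueError(
            f"indices out of range for a variable with {var.shape[0]} rows")
    return tf.raw_ops.ResourceSparseApplyAdagrad(
        var=var.handle, accum=accum.handle, lr=lr,
        grad=grad, indices=indices, **kwargs)
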
@Venkat6871

Hi @x0w3n ,

  • I was able to reproduce the issue on Colab using TF v2.15 and tf-nightly. Please find the gist here for reference.

Thank you!
