
RNN not compatible with XLA (TF backend) #18456

Open
chenmoneygithub opened this issue Jun 10, 2023 · 2 comments
@chenmoneygithub (Contributor)

RNNs cannot be jit-compiled on the TF backend; see the error below:

Detected unsupported operations when trying to compile graph __inference_one_step_on_data_993[] on XLA_GPU_JIT: CudnnRNN (No registered 'CudnnRNN' OpKernel for XLA_GPU_JIT devices compatible with node {{node CudnnRNN}}){{node CudnnRNN}}

There is nothing we can really do about this on the Keras side, but I'm opening this issue for tracking and reference purposes. A minimal sketch reproducing the failure is below.
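
For reference, a minimal sketch that reproduces the failure, assuming the TF backend and a CUDA GPU (the model architecture, shapes, and data here are illustrative, not from the original report; the cuDNN kernel is only selected when the LSTM runs on GPU with its default arguments):

```python
import numpy as np
import keras

# Minimal reproduction sketch (assumes the TF backend and a CUDA GPU,
# where LSTM dispatches to the cuDNN kernel by default).
model = keras.Sequential([
    keras.Input(shape=(4, 2)),
    keras.layers.LSTM(8),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse", jit_compile=True)

x = np.random.random((16, 4, 2)).astype("float32")
y = np.random.random((16, 1)).astype("float32")

# Fails with: No registered 'CudnnRNN' OpKernel for XLA_GPU_JIT devices.
model.fit(x, y, epochs=1)
```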

@fchollet (Member)

We should disable jit_compile in "auto" mode when the backend is TF, the model contains an LSTM or GRU layer, and cuDNN is usable for those layers.

@chenmoneygithub (Contributor, Author)

@fchollet Yes, this should be a simple fix. We could add a util such as maybe_disable_xla(), and further situations that require disabling XLA could also be covered by that util. A rough sketch of the idea follows below.

I will benchmark RNN with XLA off, though that will make the comparison against JAX a bit odd.
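
For illustration, here is what such a helper could look like; maybe_disable_xla, the layer walk, and the return convention are hypothetical, not actual Keras internals:

```python
import keras

def _iter_layers(layer):
    # Walk the layer tree, including layers nested inside sub-models.
    yield layer
    for sub in getattr(layer, "layers", []):
        yield from _iter_layers(sub)

def maybe_disable_xla(model, jit_compile):
    """Hypothetical helper: resolve jit_compile="auto" to True/False.

    Falls back to non-XLA execution when jit compilation is known to
    fail, e.g. cuDNN-backed LSTM/GRU layers on the TF backend, whose
    CudnnRNN op has no XLA kernel.
    """
    if jit_compile != "auto":
        return jit_compile  # respect an explicit user choice
    if keras.backend.backend() == "tensorflow":
        for layer in _iter_layers(model):
            if isinstance(layer, (keras.layers.LSTM, keras.layers.GRU)):
                return False  # would hit the CudnnRNN error under XLA
    return True
```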

fchollet transferred this issue from keras-team/keras-core on Sep 22, 2023