[TF2.0] Change default types globally #26033
@alextp, @reedwm Hello! By `tf.set_default_float` I mean behaviour like this:

```python
tensor1 = tf.Variable(v)
tf.set_default_float(tf.float64)
tensor2 = tf.Variable(v)
tensor3 = tf.Variable(v, dtype=tf.float16)
# tensor1 has fp32
# tensor2 has fp64
# tensor3 has fp16
```
No current work that I am aware of.
> On Thu, Apr 4, 2019 at 3:14 AM Artem Artemev wrote:
> @alextp, @reedwm Hello everyone! Out of curiosity, does anyone work on this feature, or is it postponed until better times? :)
-- Alex
You're right, we cannot use floatx for backwards-compatibility reasons, since currently it does not affect the dtypes of tensors created outside Keras. This proposal sounds reasonable. It would be convenient to be able to have all variables/constants in a user-chosen dtype without having to specify the `dtype` argument every time.

@alextp, what do you think? Maybe we should wait until the mixed precision API is further along, so at least we could address (3) with more certainty.
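For concreteness, a minimal sketch of what such a global default could look like; `set_default_float` and `make_variable` are hypothetical names used for illustration, not an existing TensorFlow API:

```python
import tensorflow as tf

# Hypothetical module-level default, consulted only when the caller
# does not pass an explicit dtype.
_DEFAULT_FLOAT = tf.float32

def set_default_float(dtype):
    global _DEFAULT_FLOAT
    _DEFAULT_FLOAT = dtype

def make_variable(value, dtype=None):
    # An explicit dtype always wins; otherwise fall back to the default.
    return tf.Variable(value, dtype=dtype or _DEFAULT_FLOAT)

v1 = make_variable(0.5)                    # fp32
set_default_float(tf.float64)
v2 = make_variable(0.5)                    # fp64
v3 = make_variable(0.5, dtype=tf.float16)  # fp16
```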
I think this feature will be very useful for the upcoming tf.keras mixed precision API, so I will revisit this, at least for floating-point types. @awav, in your example with variables, I think it makes sense if [...]. Also, IMO, [...].
Overall looks good. Ideally `floatx` can just call `set_default_float` for correctness.
We would also be very interested in this feature. Since we use TF to do likelihood fits requiring float64 (zfit) and allow users to specify their models with TF themselves, we often run into problems where a user creates a float32 tensor (because of the default) that conflicts with the other float64 tensors. For users unaware of TF internals this is quite annoying (and it is hard for us to catch and to tell them explicitly what to change). We have even started wrapping some of the TF functions to avoid this problem, so a global fix would be highly appreciated (and we may be able to help implement it, of course).
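A rough sketch of that wrapping approach (the actual zfit helpers are not shown in the thread; `constant64` is an illustrative name):

```python
import tensorflow as tf

def constant64(value, dtype=None, **kwargs):
    # Force float64 unless the caller explicitly asks for something else.
    return tf.constant(value, dtype=dtype or tf.float64, **kwargs)

x = constant64(1.0)                # float64 instead of the float32 default
n = constant64(1, dtype=tf.int32)  # explicit dtypes still win
```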
@mayou36 thank you for the feedback. I plan to start a design doc in the upcoming weeks.
@reedwm, I agree with that; it is absolutely logical.
@reedwm, any updates on that?
Not yet, sorry! I am currently busy with other tasks, but I hope to have a design doc fairly soon.
Unfortunately still no update :(. I am working on some mixed precision tasks at the moment. Once the Keras mixed precision API is more complete, I can get to this.
@reedwm any news? I'd be very keen for this to make it into TensorFlow; the lack of configurable default dtypes is one of our biggest pain points with TensorFlow, and it makes life difficult and cumbersome for both the developers and users of downstream libraries (such as GPflow)!
Has there been progress?
Sorry, no updates yet :(
Any updates on this? I am very excited to see this feature in TensorFlow.
`tf.keras.backend.set_floatx` seems to be doing this.
Not exactly; it works for Keras only and doesn't have any effect on the rest of TensorFlow.
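A quick demonstration of that limitation (behavior as of TF 2.x):

```python
import tensorflow as tf

tf.keras.backend.set_floatx("float64")

# Keras components pick up floatx...
layer = tf.keras.layers.Dense(1)
print(layer.dtype)             # float64

# ...but core TensorFlow ops do not:
print(tf.constant(1.0).dtype)  # <dtype: 'float32'>
```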
This would be really helpful. Writing float16 code (outside Keras) is filled with boilerplate crap.
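For example, keeping even a tiny model in float16 without Keras means repeating the dtype on every creation op, along these lines:

```python
import tensorflow as tf

# Every creation op needs the dtype spelled out to stay in float16.
w = tf.Variable(tf.random.normal([128, 128], dtype=tf.float16))
b = tf.Variable(tf.zeros([128], dtype=tf.float16))
x = tf.constant(1.0, dtype=tf.float16)
```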
This could be quite helpful, but it's sad to see no update.
Three years, and still waiting.
@reedwm what is the status of this? Any progress, or anything we can help with?
I'd like to chime in as well that this would be a very useful feature. It can be very cumbersome to cast every tensor to the correct dtype manually, especially when using existing data. Let us know how we might be able to help, as a community!
Just to keep the dream alive... yes, this would be really useful!
Does it maybe increase the motivation for this feature to point out that PyTorch has it? :)
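For reference, the PyTorch equivalent alluded to above:

```python
import torch

# PyTorch lets you change the default floating-point dtype globally.
torch.set_default_dtype(torch.float64)
print(torch.tensor([1.0]).dtype)  # torch.float64
```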
Hi! Any updates regarding this feature? This would indeed be very helpful! :)
Hello everyone,
I made the same request a while ago at tensorflow/community. A similar question was raised before at tensorflow/tensorflow#9781, where maintainers argued that GPUs are much faster on float32, that the default type cannot (should not) be changed for backwards-compatibility reasons, et cetera.
The thing is that precision is very important for algorithms that use operations like Cholesky decompositions and linear solvers. It becomes very tedious to specify the type everywhere, and it gets even worse when you start using other frameworks or small libraries that follow the standard type settings; sometimes they become useless just because of type incompatibilities. The policy of "changing types locally solves your problems" becomes cumbersome.
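A minimal example of the kind of mismatch described above (the values are made up for illustration):

```python
import tensorflow as tf

cov = tf.constant([[2.0, 0.0], [0.0, 2.0]], dtype=tf.float64)
jitter = tf.eye(2) * 1e-6  # float32, because of the default
# The addition raises InvalidArgumentError: float64 and float32
# tensors cannot be mixed without an explicit cast.
chol = tf.linalg.cholesky(cov + jitter)
```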
It would be great to have methods `tf.set_default_float` and `tf.set_default_int` in TensorFlow 2.0, and I believe that such a small change will make TensorFlow more user-friendly.

Kind regards,
Artem Artemev