GPFlow-2.0 - issue with default_float and likelihood variance #1244
Comments
Thanks for your well written question! I think I might have a very simple answer: you may have been bitten by a GPflow 2.0 gotcha. The behaviour of gpflow's parameters has changed with version 2.0 to match that of tensorflow Variables, so the solution is to update the parameter with `.assign(...)` rather than plain assignment. @awav, could you point @daragallagher to the gpflow2.0 gotchas notebook, please?
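Since gpflow 2.0 parameters mirror `tf.Variable` semantics, the gotcha can be sketched with plain TensorFlow (a minimal analogy; the real object is a gpflow `Parameter`, not a bare `tf.Variable`):

```python
import tensorflow as tf

# A gpflow Parameter behaves like a tf.Variable: update it in place with
# .assign(...) so the dtype (and the Variable itself) is preserved.
v = tf.Variable(1.0, dtype=tf.float64)  # stands in for a gpflow Parameter
v.assign(0.1)   # in-place update; dtype stays float64
# v = 0.1       # this would merely rebind the Python name to a raw float,
#               # discarding the Variable and its dtype
```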
@jameshensman, yes, that's the case. Could you repost this question on StackOverflow?
Thanks for the quick response. I thought it might be something like that. @awav I had read (quickly) through that intro notebook but missed the assign recommendation. Nearly all of the sample code that appears in google search results is understandably for gpflow 1.x, so I think this gotcha may catch quite a few noobs like myself. It's probably not practical to support the older style? Anyway, in case it helps others, I've posted this issue as a SO question here: https://stackoverflow.com/questions/60055919/gpflow-2-0-issue-with-default-float-and-likelihood-variance
@awav I also see now that my suggestion that gpflow add code to convert to the default float for the sake of API consistency will not really help, since the issue really originates in tensorflow itself and has infected modules built on top of tensorflow, like tensorflow-probability. So there is little that can be done in gpflow to address the consistency issues this causes. I see you've already raised this as a tensorflow issue here: tensorflow/tensorflow#26033

Here is an example of the sort of inconsistency that has bothered me, taken from one of the example notebooks: if using float64, one argument, adaptation_rate, does not require a cast, while the other, target_accept_prob, does. This makes (IMHO) the API quite user-hostile. Either the user must defensively wrap every float argument in a cast, or use a debugger to locate the source of a float argument that eventually raises an "expected to be a double tensor but is a float tensor" exception. If tensorflow were to provide a default float setting, this inconsistency would not arise. Even my initial example would have worked fine, as tf.fill would have defaulted to float64. I think many users of tensorflow are focused on deep learning and so never look beyond float32, which I suspect is why the issue hasn't been prioritized. This is unfortunate for libraries built on tensorflow, where the numeric properties of float64 make more sense. So I'll add a "me too" comment to tensorflow issue 26033 and close this one.
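The kind of call being described can be sketched as follows. The `tfp.mcmc` call is reconstructed from memory and kept as a comment (its surrounding variables are assumptions); the runnable part shows the underlying dtype behaviour of TF ops that produces the inconsistency:

```python
import tensorflow as tf

# Roughly the notebook call in question (reconstructed, names assumed):
#   tfp.mcmc.SimpleStepSizeAdaptation(
#       hmc,
#       num_adaptation_steps=10,
#       adaptation_rate=0.01,                          # bare Python float: fine
#       target_accept_prob=tf.cast(0.75, tf.float64),  # needs an explicit cast
#   )
# The inconsistency comes down to how individual TF ops treat float literals:
x64 = tf.constant([1.0, 2.0], dtype=tf.float64)
promoted = x64 * 0.75        # the literal is converted to the tensor's float64
filled = tf.fill([2], 0.75)  # but tf.fill infers float32 from the same literal
```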
@daragallagher thank you for such a well-written issue report!
I am new to GP and GPflow, so excuse me if this is a silly question.
I am trying to use gpflow (2.0rc) with float64 and have been struggling to get even simple examples to work. I configure gpflow using:
I am using GPR:
And indeed, if I print a summary, both parameters have dtype float64. However, if I try to predict with this model, I get an error.
A debugging session led me to the following line in gpr.py (line 88):
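A standalone paraphrase of that line (variable names are illustrative; `variance` stands in for `self.likelihood.variance` after it has been rebound to a plain Python float):

```python
import tensorflow as tf

variance = 0.1  # stands in for self.likelihood.variance after `= 0.1`
num_data = 3

# Paraphrase of the gpr.py line: build a diagonal noise matrix.
# tf.fill infers float32 from the bare Python float, so the result
# clashes with the float64 kernel matrix downstream.
s = tf.linalg.diag(tf.fill([num_data], variance))
```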
This creates a matrix with dtype float32, which causes the blow-up described above.
Perhaps it should be something along the lines of:
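A standalone sketch of one possible fix (here `tf.float64` stands in for gpflow's configured `default_float()`, and the variable names are illustrative):

```python
import tensorflow as tf

variance = 0.1  # stands in for self.likelihood.variance
num_data = 3

# Cast the variance to the configured default float before filling,
# so the noise matrix matches the float64 kernel matrix.
s = tf.linalg.diag(tf.fill([num_data], tf.cast(variance, tf.float64)))
```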
I'm not sure - I'm not a tensorflow expert either :)
Even if this is because I'm not using the API correctly, it's quite confusing (I think) that some parameters given as simple Python floats are handled correctly, but setting the likelihood variance in this way causes an exception.
My environment:
Here's a full Python script: