Running L-BFGS-B optimizer in TF2 #48167
Comments
Adding the contributions welcome label to this issue for further investigation by the community. If you are interested in working on this issue, please leave a comment and I will assign it to you. Thanks!
@nikitamaia Probably we could route this to the ecosystem (TF Probability). See tensorflow/probability#565
Hi, can I contribute to this issue?
It looks like this issue doesn't have a repo yet. I am creating one.
Any news here? @vulkomilev Are you still active on this?
yes
@bhack Can you help me with the organization of the project?
Any update on this? I am also looking forward to L-BFGS in TF2.
I have a problem with the organization of the project. The TensorFlow project is huge and very complex, and I am waiting for help.
Hi @vulkomilev, what is the progress of L-BFGS on TF 2.x? Looking forward to your contribution.
For those who need L-BFGS in TF 2.x, I implemented a TensorFlow interface for tfp.optimizer.lbfgs_minimize:
@lululxvi, thanks a lot Lu, great job!
@lululxvi How do I use this interface for tfp.optimizer.lbfgs_minimize in my own code with TF 2.x?
import deepxde as dde
net = ... # your tf.keras.Model
trainable_variables = net.trainable_variables # the network weights and biases
def build_loss():  # no arguments
    loss = ...  # compute the loss for the network
    return loss
dde.optimizers.tfp_optimizer.lbfgs_minimize(trainable_variables, build_loss)
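Under the hood, an interface like this mainly has to flatten all trainable variables into the single 1-D tensor that tfp.optimizer.lbfgs_minimize expects, and scatter the optimum back into the network afterwards. Here is a minimal sketch of that idea (my own illustration, not DeepXDE's exact code; trainable_variables and build_loss are as above):

import numpy as np
import tensorflow as tf
import tensorflow_probability as tfp

def lbfgs_fit_sketch(trainable_variables, build_loss, max_iterations=500):
    # Record each variable's shape and its length in the flat vector.
    shapes = [v.shape.as_list() for v in trainable_variables]
    sizes = [int(np.prod(s)) for s in shapes]

    def assign_from_flat(flat):
        # Scatter the flat 1-D tensor back into the network's variables.
        for v, part, shape in zip(trainable_variables, tf.split(flat, sizes), shapes):
            v.assign(tf.reshape(part, shape))

    def value_and_gradients(flat):
        # tfp calls this with a candidate flat weight vector.
        assign_from_flat(flat)
        with tf.GradientTape() as tape:
            loss = build_loss()
        grads = tape.gradient(loss, trainable_variables)
        return loss, tf.concat([tf.reshape(g, [-1]) for g in grads], axis=0)

    initial = tf.concat([tf.reshape(v, [-1]) for v in trainable_variables], axis=0)
    results = tfp.optimizer.lbfgs_minimize(
        value_and_gradients, initial_position=initial, max_iterations=max_iterations)
    assign_from_flat(results.position)  # leave the network at the optimum
    return results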
@JHvdM1959 Is this still an issue? |
@sushreebarsa Yes, it is still an issue. L-BFGS in tfp helps, but it is not convenient to use, and we have to add an interface, as I discussed above in #48167 (comment). Ideally, direct support would be better, something like L-BFGS in PyTorch: https://pytorch.org/docs/stable/generated/torch.optim.LBFGS.html
Probably we could add a TF counterpart like https://pytorch.org/docs/stable/_modules/torch/optim/lbfgs.html#LBFGS. But I think it is better to open a new ticket in Keras directly. You could ask to reopen this or open a new one:
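For reference, the PyTorch interface being pointed to is used roughly like this (a standard torch.optim.LBFGS closure loop; x and y are placeholder tensors made up for illustration):

import torch

x = torch.randn(64, 2)  # placeholder inputs
y = torch.randn(64, 1)  # placeholder targets
model = torch.nn.Linear(2, 1)
optimizer = torch.optim.LBFGS(model.parameters(), max_iter=20)

def closure():
    # L-BFGS re-evaluates the loss several times per step, hence the closure.
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()
    return loss

for _ in range(10):
    optimizer.step(closure)

A TF2 counterpart would presumably hide the flatten/assign bookkeeping shown earlier behind a similar step-with-closure API.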
@lululxvi I tried to use the interface from DeepXDE but encountered an error. While I understand the error, I'm not sure how to resolve it. Any help in this regard is appreciated. I'm using:
The error: It seems like the function only accepts NumPy arrays but I have a TF symbolic tensor. Any way to resolve this?
@rohitvuppala The error doesn't seem to come from DeepXDE, as there is only one place using NumPy.
@lululxvi Thank you for the quick response! It seems to be an issue with NumPy, as mentioned here: #56527 (comment).
Hi folks, I was also interested in using L-BFGS (and other batch solvers) from
I set up the package to be as drag & drop into TF2 as possible, since I was myself alternating between solvers like Adam and L-BFGS depending on the problem size. Extended footnote on the
@mbhynes this is excellent, thank you so much for sharing! Exactly what I've been looking for over the past few months.
Can this interface return the value of the target function during iteration? If so, how can I achieve that?
No. But you can simply add one line in the code to return the loss: https://github.com/lululxvi/deepxde/blob/master/deepxde/optimizers/tensorflow/tfp_optimizer.py
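Concretely, the idea is to record the loss inside the value-and-gradients closure, something like this (a sketch based on the wrapper shape earlier in the thread; assign_from_flat is the hypothetical helper that writes the flat weights back, and eager execution is assumed so the Python list append actually runs):

loss_history = []  # one entry per L-BFGS function evaluation

def value_and_gradients(flat):
    assign_from_flat(flat)  # hypothetical helper from the wrapper sketch
    with tf.GradientTape() as tape:
        loss = build_loss()
    grads = tape.gradient(loss, trainable_variables)
    loss_history.append(float(loss))  # the added line
    return loss, tf.concat([tf.reshape(g, [-1]) for g in grads], axis=0)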
@lululxvi I tried your solution; however, the last code did not run correctly:
self.optimizer = dde.optimizers.tensorflow_compat_v1.scipy_optimizer.ScipyOptimizerInterface(self.loss,
@jesusgl86 Hi, I am trying to use L-BFGS-B with the PINNs code from their GitHub, using the same method with the class that they used. I tried @lululxvi's lib, but it always gives me errors whether I use it inside or outside the class. Is there any way to make it work, as I need to use two optimizers? @lululxvi @jesusgl86

import sys
import numpy as np
import tensorflow as tf

np.random.seed(1234)

class PhysicsInformedNN:
    ...  # class body elided in the original post

if __name__ == "__main__":
    ...  # driver code elided in the original post
@RoboticAutonomy Example:

pip install tf-nightly

It will give an error that one command was not changed, which is tf.contrib.opt.ScipyOptimizerInterface. To fix this, do the following. Have these libraries imported just in case:

import tensorflow as tf

Then replace:

self.optimizer = tf.contrib.opt.ScipyOptimizerInterface(self.loss,

With:

self.optimizer = dde.optimizers.tensorflow_compat_v1.scipy_optimizer.ScipyOptimizerInterface(self.loss,

Hope this helps.
Update: @jesusgl86 Thank you so much for the help. It works now, but unfortunately the results of the training are completely wrong, and the training finished in 0.5 s.
@RoboticAutonomy Let me post the AC example setup:

import sys
import numpy as np
import tensorflow as tf

np.random.seed(1234)

class PhysicsInformedNN:
    ...  # class body elided in the original post
Is the problem solved?
Hey all, it is a rather old topic, but I just want to say that I created a package from Pi-Yueh Chuang's code. It makes using L-BFGS very easy in TensorFlow 2. Here is the link: https://github.com/mBarreau/tf2-bfgs/ Now it boils down to creating the optimizer and calling it (see the readme):
Hope that helps :)
System information
Issue at hand:
Originally, the optimizer based on L-BFGS-B ran only on TF1 via:
self.optimizer = tf.contrib.opt.ScipyOptimizerInterface(
    self.loss,
    method='L-BFGS-B',
    options={'maxiter': 50000,
             'maxfun': 50000,
             'maxcor': 50,
             'maxls': 50,
             'ftol': 1.0 * np.finfo(float).eps})
The .contrib module has been left out of TF2, and so far no straightforward solution that works well has been found anywhere.
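In the meantime, one workaround is to drive scipy.optimize.minimize(method='L-BFGS-B') with gradients from a GradientTape. A sketch, not an official TF2 API; model and compute_loss stand in for your own code, and the options mirror the TF1 snippet above:

import numpy as np
import scipy.optimize
import tensorflow as tf

def scipy_lbfgs_fit(model, compute_loss, maxiter=50000):
    variables = model.trainable_variables
    shapes = [v.shape.as_list() for v in variables]
    sizes = [int(np.prod(s)) for s in shapes]

    def set_weights(flat):
        # Copy slices of the flat float64 vector back into each variable.
        offset = 0
        for v, size, shape in zip(variables, sizes, shapes):
            v.assign(tf.cast(tf.reshape(flat[offset:offset + size], shape), v.dtype))
            offset += size

    def loss_and_grad(flat):
        # SciPy passes and expects float64 NumPy arrays.
        set_weights(flat)
        with tf.GradientTape() as tape:
            loss = compute_loss()
        grads = tape.gradient(loss, variables)
        flat_grad = np.concatenate([g.numpy().ravel() for g in grads])
        return float(loss.numpy()), flat_grad.astype(np.float64)

    x0 = np.concatenate([v.numpy().ravel() for v in variables]).astype(np.float64)
    result = scipy.optimize.minimize(
        loss_and_grad, x0, jac=True, method='L-BFGS-B',
        options={'maxiter': maxiter, 'maxfun': maxiter,
                 'maxcor': 50, 'maxls': 50,
                 'ftol': 1.0 * np.finfo(float).eps})
    set_weights(result.x)  # leave the model at the optimum found
    return result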
Reason for request:
Physics-informed neural networks (PINNs) are a significant and growing development for science/engineering applications. Hence, not having this functionality implemented in a usable and accessible way in TF2 is an issue.
Hence, in short, this is a feature request:
Please provide a straightforward-to-use implementation of Broyden–Fletcher–Goldfarb–Shanno (BFGS) optimization in TF2.
Ideally, it would be accessible through the Keras framework (be it the functional API or not).
Thanks and best regards,
Jan van de Mortel