Inconsistent regularization loss computation when used with pretrained models. #41161
Labels: comp:keras (Keras related issues), stat:awaiting tensorflower (Status - Awaiting response from tensorflower), TF 2.11 (Issues related to TF 2.11), type:bug (Bug)
System information
Describe the current behavior
I am trying to use the pretrained MobileNetV2 model for a simple classification task. I want to add a regularization loss to both the MobileNetV2 base model and the added FC layers. I ran 3 experiments, as per the attached Colab notebook, and got different values for the computed regularization loss: the MobileNetV2 regularization loss is counted twice in the model, and I don't know why this happens.
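One way to sanity-check what Keras reports in `model.losses` is to compute the L2 penalty over the base model's kernels by hand and compare the two numbers. This is a minimal sketch, not the notebook's code: the input shape, the `l2` coefficient, and the use of `weights=None` (to avoid downloading pretrained weights) are all illustrative assumptions.

```python
import tensorflow as tf

# Hypothetical reproduction setup: a MobileNetV2 base with random
# weights (weights=None avoids a download) and an assumed L2 factor.
l2 = 1e-4
base = tf.keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), include_top=False, weights=None)

# Manually compute the L2 penalty over every kernel weight.
# Note: tf.nn.l2_loss(w) = 0.5 * sum(w**2), whereas
# tf.keras.regularizers.l2(l2) computes l2 * sum(w**2), so the two
# differ by a factor of 2 -- keep that in mind when comparing against
# whatever Keras accumulates in model.losses.
kernels = [w for w in base.trainable_weights if "kernel" in w.name]
manual_reg = l2 * tf.add_n([tf.nn.l2_loss(w) for w in kernels])
```

Comparing `manual_reg` (scaled consistently) against `tf.add_n(model.losses)` makes a double-counted base-model penalty show up as roughly a 2x discrepancy.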
Describe the expected behavior
The regularization loss computed in all three experiments should be the same.
Standalone code to reproduce the issue
Colab Link : https://colab.research.google.com/drive/1CeKMIAq_g0AOKdakupKIeUjEhkL_TtI1?usp=sharing
Other info / logs