I am currently training a model from tf.keras.applications together with a data augmentation layer. Weirdly, after I import the model from applications, the augmentation layer stops working. It works fine before the import. What is going on?
Also, this only started happening recently, after TF 2.8.0 was released; before that, everything worked fine.
The code for the augmentation layer is
data_augmentation = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal_and_vertical"),
    tf.keras.layers.RandomRotation(0.5),
])
And I am importing the model using
base_model = tf.keras.applications.MobileNetV3Small(
    input_shape=(75, 50, 3),
    alpha=1.0,
    weights='imagenet',
    pooling='avg',
    include_top=False,
    dropout_rate=0.1,
    include_preprocessing=False)
Please help me understand what is going on. You can reproduce the code here on this notebook https://colab.research.google.com/drive/13Jd3l2CxbvIWQv3Y7CtryOdrv2IdKNxD?usp=sharing
I noticed the same issue with TF 2.8. It can be solved by adding training=True when you test the augmentation layer:
aug = data_augmentation(image, training=True)
The reason is that the augmentation layer behaves differently during training and inference: it applies the random transforms during training but does nothing at inference time. Ideally, the layer would set the training= argument appropriately for the situation, but in the code above it has no way of knowing that your intention is to test the layer, so it falls back to inference mode.
But I think you should still leave the training argument at its default when you build the full model, letting Keras toggle it automatically (True during fit(), False during predict() and evaluate()) so the augmentation layer does its job only while training.
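A sketch of what that full model could look like, with the augmentation layer left at its default training flag. The Dense(10) classification head is an assumption for illustration, and weights=None is used here instead of 'imagenet' only to avoid downloading the pretrained weights in this snippet:

```python
import tensorflow as tf

data_augmentation = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal_and_vertical"),
    tf.keras.layers.RandomRotation(0.5),
])

base_model = tf.keras.applications.MobileNetV3Small(
    input_shape=(75, 50, 3),
    alpha=1.0,
    weights=None,  # use weights='imagenet' in real training
    pooling='avg',
    include_top=False,
    dropout_rate=0.1,
    include_preprocessing=False)

inputs = tf.keras.Input(shape=(75, 50, 3))
x = data_augmentation(inputs)  # no explicit training= here; Keras sets it
x = base_model(x)
outputs = tf.keras.layers.Dense(10, activation="softmax")(x)  # hypothetical head
model = tf.keras.Model(inputs, outputs)
```

With this wiring, model.fit() runs the augmentation and model.predict() skips it, which is exactly the behavior you want.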
Answered By – Ben2018