[FIXED] Using RBF neural networks for testing

Issue

I am trying to run this code LINK on my laptop. I changed these imports:

  • from keras.engine.topology import Layer
  • from tensorflow.keras.models import load_model

to these imports so the code would work on my machine:

  • from tensorflow.python.keras.layers import Layer
  • from tensorflow.python.keras.models import load_model

and whenever I run the program, this part fails to execute:

#model already saved in file
from tensorflow.python.keras.models import  load_model

newmodel1= load_model("Zoghbio.h5",
                          custom_objects={'RBFLayer': RBFLayer})
print("OK")

And I get this error. How can I fix it, please?

Traceback (most recent call last):
  File "c:\Users\pc\Desktop\Ali\RBFNetworks\RBF_neural_network_python-master\RBF_neuralNetwork .py", line 214, in <module>
    newmodel1= load_model("Zoghbio.h5",
  File "C:\Users\pc\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\keras\saving\save.py", line 206, in load_model
    return saved_model_load.load(filepath, compile, options)
  File "C:\Users\pc\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\keras\saving\saved_model\load.py", line 122, in load
    meta_graph_def = loader_impl.parse_saved_model(path).meta_graphs[0]
  File "C:\Users\pc\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\saved_model\loader_impl.py", line 115, in parse_saved_model
    raise IOError(
OSError: SavedModel file does not exist at: Zoghbio.h5\{saved_model.pbtxt|saved_model.pb}

UPDATED error:

Save model to file C:/Users/pc/Desktop/RBFNetworks/RBF_neural_network_python-master/my_file.h5 ... Traceback (most recent call last):
  File "c:\Users\pc\Desktop\RBFNetworks\RBF_neural_network_python-master\RBF_neuralNetwork .py", line 214, in <module>
    model.save(z_model)
  File "C:\Users\pc\AppData\Local\Programs\Python\Python38\lib\site-packages\keras\utils\traceback_utils.py", line 67, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "C:\Users\pc\AppData\Local\Programs\Python\Python38\lib\site-packages\keras\engine\base_layer.py", line 745, in get_config
    raise NotImplementedError(textwrap.dedent(f"""
NotImplementedError: 
Layer ModuleWrapper has arguments ['self', 'module', 'method_name']
in `__init__` and therefore must override `get_config()`.

Example:

class CustomLayer(keras.layers.Layer):
    def __init__(self, arg1, arg2):
        super().__init__()
        self.arg1 = arg1
        self.arg2 = arg2

    def get_config(self):
        config = super().get_config()
        config.update({
            "arg1": self.arg1,
            "arg2": self.arg2,
        })
        return config

Solution

In the code you linked in another question, you have:

# # saving to and loading from file
# z_model = f"Z_model.h5"
# print(f"Save model to file {z_model} ... ", end="")
# model.save(z_model)
# print("OK")

#model already saved in file
from tensorflow.keras.models import  load_model
newmodel1= load_model("Zoghbio.h5", custom_objects={'RBFLayer': RBFLayer})
print("OK")

It gives you an error because you are trying to load a model that has never been saved. Simply uncomment the portion above:

# saving to and loading from file
z_model = "my_file.h5"
print("Save model to file {} ... ".format(z_model), end="")
model.save(z_model)
print("OK")

# model already saved in file
from tensorflow.keras.models import load_model
newmodel1 = load_model("my_file.h5", custom_objects={'RBFLayer': RBFLayer})
print("OK")

Of course it doesn’t make much sense to train, save, and re-load the model right away. I think the author wanted to show how the model can be trained and saved once, and then re-loaded later to avoid training everything anew. This way you can load and use the saved model in a later run without retraining it each time.
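As a rough sketch of that workflow (assuming my_file.h5 was written by a previous run, and that the RBFLayer class from the original script is available in the new one), a later script could look like this:

# Sketch: re-use the saved model in a separate run, without retraining.
# Assumes "my_file.h5" exists from a previous run and that RBFLayer is the same
# custom layer class that was used when the model was built and saved.
import numpy as np
from tensorflow.keras.models import load_model
# from rbf_layer import RBFLayer  # hypothetical import; adjust to wherever RBFLayer is defined

model = load_model("my_file.h5", custom_objects={'RBFLayer': RBFLayer})

# Hypothetical input: the shape and preprocessing must match the training data.
sample = np.random.rand(1, model.input_shape[1])
print(model.predict(sample))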

Update (recap of the comments):

The OP got another error indicating:

Layer ModuleWrapper has arguments ['self', 'module', 'method_name']

This can happen when keras and tf.keras imports are mixed. In this case you changed the imports from the originals because you had issues with them: you said they could not be resolved.
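As a general rule, all Keras imports should come from the same namespace. Here is a sketch of what consistent, public tf.keras imports look like (an illustration of the rule, not the fix the OP ultimately applied):

# Keep every Keras import on the public tf.keras API; do not mix it with the
# standalone keras package or the private tensorflow.python.keras path.
from tensorflow.keras.layers import Layer        # base class for a custom layer such as RBFLayer
from tensorflow.keras.models import load_model   # model saving/loading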

Eventually the solution was to go back to the original imports and fix the TensorFlow installation. The best option would have been to create a fresh environment and install TensorFlow from scratch, but the OP wasn’t using any environment.

So the solution was to simply uninstall tensorflow with:

pip uninstall tensorflow

and install it again, this time the GPU version:

pip install tensorflow-gpu
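After reinstalling, a quick sanity check (not part of the linked code) to confirm that the installation works and the original-style imports resolve:

# Verify the reinstalled TensorFlow: import it, import the Keras pieces used above,
# and print the installed version.
import tensorflow as tf
from tensorflow.keras.layers import Layer
from tensorflow.keras.models import load_model

print(tf.__version__)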

Answered By – ClaudiaR
