Using Talos for automatic neural network hyperparameter tuning


I am trying out Talos for neural-network hyperparameter tuning, following an example I found online: https://www.analyticsvidhya.com/blog/2021/05/neural-network-and-hyperparameter-optimization-using-talos/

Here is my model:

import tensorflow as tf
from tensorflow.keras.layers import Embedding, GRU, Dense
from tensorflow.keras import callbacks

#Define GRU cell with dropout, with parameters to be optimized
vocab_size = 1000
max_length = 100
initializer = tf.keras.initializers.constant(-1.)

#Model definition + optimization for dropout rate
def GRU_drop(x_train, Y_train, X_val, Y_val, param):

    GRU_model_drop = tf.keras.Sequential([
        Embedding(vocab_size, 32, input_length=max_length),
        GRU(32,
            return_sequences=True,
            recurrent_dropout=param['rdropout'],
            dropout=param['dropout'],
            use_bias=True,
            bias_initializer=initializer,
            kernel_regularizer=param['regularizer'],
            recurrent_regularizer=param['regularizer'],
            bias_regularizer=param['regularizer']),
        Dense(1, activation=param['fcactivation'])])

    GRU_model_drop.compile(optimizer=param['optimizer'],
                           loss="binary_crossentropy",
                           metrics=['accuracy'])

    #training the dropout GRU model
    earlystop = callbacks.EarlyStopping(monitor="val_loss",
                                        mode="min", patience=3,
                                        restore_best_weights=True)

    history_gru_drop = GRU_model_drop.fit(padded, training_labels_final,
                                          epochs=20,
                                          batch_size=param['batchsize'],
                                          validation_data=(testing_padded, testing_labels_final),
                                          callbacks=[earlystop])
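For completeness, `param` is a plain dict mapping each hyperparameter name to a list of candidate values. The actual values come from the tutorial; the ones below are only illustrative, reconstructed from the keys the model function reads:

```python
# Hypothetical parameter grid matching the keys used in GRU_drop.
# The concrete values here are illustrative, not the tutorial's exact grid.
param = {
    'batchsize': [32, 64],
    'dropout': [0, 0.2],
    'rdropout': [0, 0.2],
    'fcactivation': ['relu', 'sigmoid'],
    'optimizer': ['Adam'],
    'regularizer': ['l1', 'l2'],
}
```

Talos runs one experiment per combination of these lists (32 combinations for the grid above).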

During the Scan phase I run into an error that I don’t understand:
#Scan gives experiments of all hyperparameter combinations

from talos import Scan

h = Scan(x = padded, 
         y = training_labels_final,
         model=GRU_drop,
         params=param,
         print_params=True, 
         reduction_metric="val_loss",
         experiment_name = "SpamGRU"
        )

The full output, ending in the error, is:

{'batchsize': 32, 'dropout': 0, 'fcactivation': 'relu', 'optimizer': 'Adam', 'rdropout': 0, 'regularizer': 'l1'}
Epoch 1/20
49/49 [==============================] - 6s 52ms/step - loss: 14.4929 - accuracy: 0.4751 - val_loss: 12.3564 - val_accuracy: 0.5306
Epoch 2/20
49/49 [==============================] - 2s 40ms/step - loss: 12.2631 - accuracy: 0.4751 - val_loss: 10.5452 - val_accuracy: 0.5306
Epoch 3/20
49/49 [==============================] - 2s 39ms/step - loss: 10.8096 - accuracy: 0.4751 - val_loss: 9.4405 - val_accuracy: 0.5306
Epoch 4/20
49/49 [==============================] - 2s 39ms/step - loss: 9.9871 - accuracy: 0.4751 - val_loss: 8.8905 - val_accuracy: 0.5306
Epoch 5/20
49/49 [==============================] - 2s 40ms/step - loss: 9.6463 - accuracy: 0.4751 - val_loss: 8.7151 - val_accuracy: 0.5306
Epoch 6/20
49/49 [==============================] - 2s 41ms/step - loss: 9.5179 - accuracy: 0.4751 - val_loss: 8.6074 - val_accuracy: 0.5306
Epoch 7/20
49/49 [==============================] - 2s 41ms/step - loss: 9.4163 - accuracy: 0.4751 - val_loss: 8.5101 - val_accuracy: 0.5306
Epoch 8/20
49/49 [==============================] - 2s 40ms/step - loss: 9.3205 - accuracy: 0.4751 - val_loss: 8.4155 - val_accuracy: 0.5306
Epoch 9/20
49/49 [==============================] - 2s 40ms/step - loss: 9.2263 - accuracy: 0.4751 - val_loss: 8.3213 - val_accuracy: 0.5306
Epoch 10/20
49/49 [==============================] - 2s 40ms/step - loss: 9.1322 - accuracy: 0.4751 - val_loss: 8.2273 - val_accuracy: 0.5306
Epoch 11/20
49/49 [==============================] - 2s 40ms/step - loss: 9.0382 - accuracy: 0.4751 - val_loss: 8.1332 - val_accuracy: 0.5306
Epoch 12/20
49/49 [==============================] - 2s 40ms/step - loss: 8.9441 - accuracy: 0.4751 - val_loss: 8.0392 - val_accuracy: 0.5306
Epoch 13/20
49/49 [==============================] - 2s 40ms/step - loss: 8.8500 - accuracy: 0.4751 - val_loss: 7.9451 - val_accuracy: 0.5306
Epoch 14/20
49/49 [==============================] - 2s 39ms/step - loss: 8.7559 - accuracy: 0.4751 - val_loss: 7.8510 - val_accuracy: 0.5306
Epoch 15/20
49/49 [==============================] - 2s 40ms/step - loss: 8.6618 - accuracy: 0.4751 - val_loss: 7.7569 - val_accuracy: 0.5306
Epoch 16/20
49/49 [==============================] - 2s 40ms/step - loss: 8.5678 - accuracy: 0.4751 - val_loss: 7.6628 - val_accuracy: 0.5306
Epoch 17/20
49/49 [==============================] - 2s 39ms/step - loss: 8.4737 - accuracy: 0.4751 - val_loss: 7.5687 - val_accuracy: 0.5306
Epoch 18/20
49/49 [==============================] - 2s 40ms/step - loss: 8.3796 - accuracy: 0.4751 - val_loss: 7.4747 - val_accuracy: 0.5306
Epoch 19/20
49/49 [==============================] - 2s 40ms/step - loss: 8.2855 - accuracy: 0.4751 - val_loss: 7.3807 - val_accuracy: 0.5306
Epoch 20/20
49/49 [==============================] - 2s 40ms/step - loss: 8.1915 - accuracy: 0.4751 - val_loss: 7.2865 - val_accuracy: 0.5306
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_17864/2617143155.py in <module>
      3 from talos import Scan
      4 
----> 5 h = Scan(x = padded, 
      6          y = training_labels_final,
      7          model=GRU_drop,

~\anaconda3\envs\tf-gpu-copy\lib\site-packages\talos\scan\Scan.py in __init__(self, x, y, params, model, experiment_name, x_val, y_val, val_split, random_method, seed, performance_target, fraction_limit, round_limit, time_limit, boolean_limit, reduction_method, reduction_interval, reduction_window, reduction_threshold, reduction_metric, minimize_loss, disable_progress_bar, print_params, clear_session, save_weights)
    194         # start runtime
    195         from .scan_run import scan_run
--> 196         scan_run(self)

~\anaconda3\envs\tf-gpu-copy\lib\site-packages\talos\scan\scan_run.py in scan_run(self)
     24         # otherwise proceed with next permutation
     25         from .scan_round import scan_round
---> 26         self = scan_round(self)
     27         self.pbar.update(1)
     28 

~\anaconda3\envs\tf-gpu-copy\lib\site-packages\talos\scan\scan_round.py in scan_round(self)
     17     # fit the model
     18     from ..model.ingest_model import ingest_model
---> 19     self.model_history, self.round_model = ingest_model(self)
     20     self.round_history.append(self.model_history.history)
     21 

TypeError: cannot unpack non-iterable NoneType object

I don’t understand what exactly it isn’t able to unpack.
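To make sure I understand the error message itself, here is a minimal sketch (independent of Talos) that reproduces the same TypeError: unpacking the result of a function that never returns anything. The line `self.model_history, self.round_model = ingest_model(self)` in the traceback does a two-value unpacking of the same shape:

```python
# Minimal reproduction of the TypeError, with no Talos involved:
# a function with no return statement evaluates to None when called,
# and None cannot be unpacked into two variables.
def build_model():
    pass  # no return statement, so the call yields None

try:
    history, model = build_model()  # same two-value unpacking as the traceback
except TypeError as err:
    print(err)  # cannot unpack non-iterable NoneType object
```

So (if I read it right) the error means that whatever Talos is trying to split into `model_history` and `round_model` came back as None rather than a pair.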

Any hints welcome.



Source: https://stackoverflow.com/questions/70556233/using-talos-for-automatic-neural-network-hyperparameter-tuning
