Currently, the approximator only stores one set of losses per training epoch. To visualize a finer-grained loss trajectory across individual training steps, one has to implement and explicitly pass a custom callback (based on Keras' base `Callback` class).
Here is an example of such a workaround, where the loss is recorded at every training step:
```python
import keras

class DetailedLossTrajectory(keras.callbacks.Callback):
    def __init__(self):
        super().__init__()
        self.training_losses = []
        self.validation_losses = []

    def on_train_batch_end(self, batch, logs=None):
        # `logs` is a dictionary containing the loss and other metrics
        self.training_losses.append(logs.get("loss"))

    def on_test_batch_end(self, batch, logs=None):
        self.validation_losses.append(logs.get("loss"))
```
The callback then needs to be instantiated and passed in when training:
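A minimal, self-contained sketch of that usage is shown below. The small `Sequential` model, the data shapes, and the hyperparameters are placeholders standing in for the actual approximator, not its real API; the point is only that the callback goes into the `callbacks` argument of `fit` and accumulates one loss value per batch.

```python
import numpy as np
import keras

# Same workaround callback as above, repeated here so the sketch runs standalone.
class DetailedLossTrajectory(keras.callbacks.Callback):
    def __init__(self):
        super().__init__()
        self.training_losses = []
        self.validation_losses = []

    def on_train_batch_end(self, batch, logs=None):
        self.training_losses.append(logs.get("loss"))

    def on_test_batch_end(self, batch, logs=None):
        self.validation_losses.append(logs.get("loss"))

# Hypothetical stand-in for the approximator: a tiny regression model.
model = keras.Sequential([keras.Input(shape=(4,)), keras.layers.Dense(1)])
model.compile(optimizer="adam", loss="mse")

callback = DetailedLossTrajectory()
x = np.random.rand(64, 4).astype("float32")
y = np.random.rand(64, 1).astype("float32")

# 64 samples, 25% held out for validation -> 48 training samples.
# With batch_size=16 that is 3 training batches and 1 validation batch per epoch.
model.fit(x, y, epochs=2, batch_size=16, validation_split=0.25,
          callbacks=[callback], verbose=0)

print(len(callback.training_losses))    # 2 epochs * 3 batches = 6
print(len(callback.validation_losses))  # 2 epochs * 1 batch  = 2
```

After training, the per-batch trajectories in `callback.training_losses` and `callback.validation_losses` can be plotted directly, in contrast to the one-value-per-epoch history that the approximator exposes by default.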
Ideally, we should provide an interface that exposes a more detailed loss trajectory by default, and give the user control over the level of detail (e.g., recording the loss at every n-th training step).