
How To Access All Outputs From A Single Custom Loss Function In Keras

I'm trying to reproduce the architecture of the network proposed in this publication in TensorFlow. Being a total beginner to this, I've been using this tutorial as a base to work from.

Solution 1:

I had the same problem while trying to implement a triplet loss function.

I referred to Keras's example implementation of a Siamese network with a triplet loss function, but something didn't work out and I had to implement the network myself.

import tensorflow as tf
from tensorflow.keras import Model
from tensorflow.keras.layers import Input

def get_siamese_model(input_shape, conv2d_filters):
    # Define the tensors for the three input images
    anchor_input = Input(input_shape, name="Anchor_Input")
    positive_input = Input(input_shape, name="Positive_Input")
    negative_input = Input(input_shape, name="Negative_Input")

    # A single shared body, so all three inputs use the same weights
    body = build_body(input_shape, conv2d_filters)
    # Generate the feature vectors for the images
    encoded_a = body(anchor_input)
    encoded_p = body(positive_input)
    encoded_n = body(negative_input)

    distance = DistanceLayer()(encoded_a, encoded_p, encoded_n)
    # Connect the inputs with the outputs
    siamese_net = Model(inputs=[anchor_input, positive_input, negative_input],
                        outputs=distance)
    return siamese_net
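The build_body helper isn't shown in the original post. A minimal sketch of what it could look like, assuming conv2d_filters is a list of filter counts and a 128-dimensional embedding (both of which are assumptions, not from the source):

def build_body(input_shape, conv2d_filters):
    # Hypothetical shared encoder; layer sizes are illustrative only.
    inputs = Input(input_shape)
    x = inputs
    for filters in conv2d_filters:  # assumed: a list of filter counts
        x = tf.keras.layers.Conv2D(filters, 3, activation="relu", padding="same")(x)
        x = tf.keras.layers.MaxPooling2D()(x)
    x = tf.keras.layers.Flatten()(x)
    embedding = tf.keras.layers.Dense(128)(x)  # embedding size is an assumption
    return Model(inputs, embedding)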

and the "bug" was in DistanceLayer Implementation Keras posted (also in the same link above).

class DistanceLayer(tf.keras.layers.Layer):
    """
    This layer is responsible for computing the distance between the anchor
    embedding and the positive embedding, and the anchor embedding and the
    negative embedding.
    """

    def __init__(self, **kwargs):
        super().__init__(**kwargs)

    def call(self, anchor, positive, negative):
        ap_distance = tf.math.reduce_sum(tf.math.square(anchor - positive), axis=1, keepdims=True, name='ap_distance')
        an_distance = tf.math.reduce_sum(tf.math.square(anchor - negative), axis=1, keepdims=True, name='an_distance')
        return (ap_distance, an_distance)

When I trained the model, the loss function received only one of the two tensors, ap_distance or an_distance, instead of both.

Finally, the fix was to concatenate the two distance vectors (along axis=1 in this case) and split them apart again inside the loss function:

    def call(self, anchor, positive, negative):
        ap_distance = tf.math.reduce_sum(tf.math.square(anchor - positive), axis=1, keepdims=True, name='ap_distance')
        an_distance = tf.math.reduce_sum(tf.math.square(anchor - negative), axis=1, keepdims=True, name='an_distance')
        # Stack both distances into a single (batch_size, 2) tensor
        return tf.concat([ap_distance, an_distance], axis=1)

And in my custom loss:

def get_loss(margin=1.0):
    def triplet_loss(y_true, y_pred):
        # The output of the network is NOT a tuple, but a matrix of shape
        # (batch_size, 2) containing the distances between the anchor and
        # the positive example, and the anchor and the negative example.
        ap_distance = y_pred[:, 0]
        an_distance = y_pred[:, 1]

        # Compute the triplet loss by subtracting both distances and
        # making sure we don't get a negative value.
        loss = tf.math.maximum(ap_distance - an_distance + margin, 0.0)
        # tf.print("\n", ap_distance, an_distance)
        # tf.print(f"\n{loss}\n")
        return loss

    return triplet_loss
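For completeness, here is a rough sketch of how these pieces could be wired together at training time. The input shape, filter list, and random data below are placeholders for illustration, not values from the original post:

import numpy as np

# Illustrative shapes and data; replace with your real dataset.
input_shape = (28, 28, 1)
model = get_siamese_model(input_shape, conv2d_filters=[32, 64])
model.compile(optimizer="adam", loss=get_loss(margin=1.0))

# y_true is ignored by triplet_loss, but Keras still expects a target array.
anchors = np.random.rand(8, *input_shape).astype("float32")
positives = np.random.rand(8, *input_shape).astype("float32")
negatives = np.random.rand(8, *input_shape).astype("float32")
dummy_y = np.zeros((8, 2), dtype="float32")
model.fit([anchors, positives, negatives], dummy_y, epochs=1)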

Solution 2:

OK, here is an easy way to achieve this: use the loss_weights parameter. We can weight the losses of multiple outputs equally so that the combined loss is their sum. For two outputs, the total loss is

total_loss = 1 * loss_for_output1 + 1 * loss_for_output2

In your case, the network has two outputs, named reshape and global_average_pooling2d. You can now define one loss function per output:

from tensorflow.keras import backend as K

# Calculation of loss for one output, i.e. reshape
def reshape_loss(y_true, y_pred):
    # do some math with these two
    return K.mean(y_pred)

# Calculation of loss for the other output, i.e. global_average_pooling2d
def gap_loss(y_true, y_pred):
    # do some math with these two
    return K.mean(y_pred)

And when compiling, map each output name to its loss function and weight:

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=base_learning_rate),
    loss={
        'reshape': reshape_loss,
        'global_average_pooling2d': gap_loss
    },
    loss_weights={
        'reshape': 1.,
        'global_average_pooling2d': 1.
    }
)

Now the total loss is the result of 1.0 * reshape_loss + 1.0 * gap_loss, and each custom loss function receives only its own output.
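Note that the dictionary keys must match the output layer names exactly. A minimal sketch of a model whose two outputs carry those names (the architecture itself is a made-up example, not the asker's network):

import tensorflow as tf

inputs = tf.keras.Input(shape=(32, 32, 3))
x = tf.keras.layers.Conv2D(16, 3, activation="relu")(inputs)
# Naming each output layer lets the loss/loss_weights dicts reference it
out1 = tf.keras.layers.Reshape((-1,), name="reshape")(x)
out2 = tf.keras.layers.GlobalAveragePooling2D(name="global_average_pooling2d")(x)
model = tf.keras.Model(inputs, [out1, out2])

With this naming in place, the compile call above resolves each dictionary key to the corresponding output.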
