
How To Regularize Loss Function?

I'm learning TensorFlow and I'm having some trouble understanding how to regularize the cost function. I've looked around and found a lot of different answers. Could someone please explain?

Solution 1:

In TensorFlow, L2 (Tikhonov) regularization with regularization parameter `lambda_` could be written like this:

# Assuming you defined a graph, placeholders and a logits layer.
# Using cross-entropy loss:
lambda_ = 0.1
xentropy = tf.nn.softmax_cross_entropy_with_logits_v2(labels=y, logits=logits)
data_loss = tf.reduce_mean(xentropy)
# tf.nn.l2_loss(v) computes sum(v**2) / 2 for each trainable variable
l2_norms = [tf.nn.l2_loss(v) for v in tf.trainable_variables()]
l2_norm = tf.reduce_sum(l2_norms)
cost = data_loss + lambda_ * l2_norm
# from here, define optimizer, train operation and train ... :-)
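To see what the graph above actually computes, here is a minimal NumPy sketch of the same arithmetic, using made-up weight values and a made-up data loss (both are assumptions for illustration). Note that `tf.nn.l2_loss(v)` returns `sum(v**2) / 2`, not the plain sum of squares, which the sketch mirrors.

```python
import numpy as np

# Hypothetical values for illustration only:
lambda_ = 0.1
data_loss = 0.30
weights = [np.array([[0.5, -1.0], [2.0, 0.0]]),  # e.g. a kernel matrix
           np.array([1.0, -2.0])]                # e.g. a bias vector

# Mirror tf.nn.l2_loss: sum of squares divided by 2, per variable
l2_norms = [np.sum(w ** 2) / 2.0 for w in weights]
l2_norm = np.sum(l2_norms)          # 5.25/2 + 5.0/2 = 5.125

cost = data_loss + lambda_ * l2_norm
print(cost)                          # 0.30 + 0.1 * 5.125 = 0.8125
```

One design note: because the original snippet sums over all of `tf.trainable_variables()`, bias terms are penalized too; in practice it is common to regularize only the kernel/weight matrices.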

Solution 2:

Basically, you just pass a regularizer to the desired layer:

tf.keras.layers.Conv2D(filters,
                       kernel_size,
                       strides=strides,
                       padding=padding,
                       ...,
                       kernel_regularizer=tf.keras.regularizers.l2())

With the Estimator API or low-level TensorFlow, you sum all regularization terms into your loss value yourself. You can get them with tf.losses.get_regularization_loss() and add the result to your loss, or use tf.losses.get_total_loss(), which does the addition for you. Keras handles this internally.
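Whichever API adds the penalty, the effect on training is the same: the gradient of `lambda_ * l2_loss(w)` with respect to `w` is simply `lambda_ * w`, which is why L2 regularization is also known as weight decay. A small NumPy sketch (all values hypothetical) checks this against a finite difference:

```python
import numpy as np

lambda_ = 0.1
w = np.array([0.5, -1.0, 2.0])  # made-up weight vector

def penalty(w):
    # Same convention as tf.nn.l2_loss: sum(w**2) / 2
    return lambda_ * np.sum(w ** 2) / 2.0

# Analytic gradient of the penalty: lambda_ * w
penalty_grad = lambda_ * w

# Numerical check on the first component
eps = 1e-6
w_plus = w.copy()
w_plus[0] += eps
numeric = (penalty(w_plus) - penalty(w)) / eps
print(np.allclose(numeric, penalty_grad[0], atol=1e-4))  # True
```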
