Class Loss
Loss base class.
Aliases:
- Class tf.compat.v1.keras.losses.Loss
- Class tf.compat.v2.keras.losses.Loss
- Class tf.compat.v2.losses.Loss
To be implemented by subclasses:
* call(): Contains the logic for loss calculation using y_true and y_pred.
Example subclass implementation:

import tensorflow as tf

class MeanSquaredError(tf.keras.losses.Loss):
  def call(self, y_true, y_pred):
    y_pred = tf.convert_to_tensor(y_pred)
    y_true = tf.cast(y_true, y_pred.dtype)
    # Reduce over the last axis; the base class applies `reduction` to the rest.
    return tf.reduce_mean(tf.square(y_pred - y_true), axis=-1)
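The subclass can then be called like any built-in loss. A minimal usage sketch (the input tensors below are made up for illustration):

loss_fn = MeanSquaredError()
y_true = tf.constant([[0., 1.]])
y_pred = tf.constant([[1., 1.]])
print(loss_fn(y_true, y_pred).numpy())  # 0.5 for a batch of one sample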
When used with tf.distribute.Strategy, outside of built-in training loops such as tf.keras compile and fit, please use 'SUM' or 'NONE' reduction types, and reduce losses explicitly in your training loop. Using 'AUTO' or 'SUM_OVER_BATCH_SIZE' will raise an error. Please see https://www.tensorflow.org/alpha/tutorials/distribute/training_loops for more details on this.
You can implement 'SUM_OVER_BATCH_SIZE' using the global batch size like:

with strategy.scope():
  loss_obj = tf.keras.losses.CategoricalCrossentropy(
      reduction=tf.keras.losses.Reduction.NONE)
  ....
  loss = (tf.reduce_sum(loss_obj(labels, predictions)) *
          (1. / global_batch_size))
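For context, here is a minimal sketch of explicit reduction inside a custom training loop; the model, optimizer, and train_step function are illustrative placeholders, not part of this API:

import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()
global_batch_size = 64

with strategy.scope():
  model = tf.keras.Sequential([tf.keras.layers.Dense(10)])
  optimizer = tf.keras.optimizers.SGD()
  loss_obj = tf.keras.losses.CategoricalCrossentropy(
      from_logits=True,
      reduction=tf.keras.losses.Reduction.NONE)

def train_step(features, labels):
  with tf.GradientTape() as tape:
    predictions = model(features, training=True)
    # With Reduction.NONE, loss_obj returns one value per example.
    per_example_loss = loss_obj(labels, predictions)
    # Divide by the *global* batch size so that per-replica losses sum to
    # the correct SUM_OVER_BATCH_SIZE value across all replicas.
    loss = tf.reduce_sum(per_example_loss) * (1. / global_batch_size)
  grads = tape.gradient(loss, model.trainable_variables)
  optimizer.apply_gradients(zip(grads, model.trainable_variables))
  return loss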
Args:
reduction: (Optional) Type of tf.keras.losses.Reduction to apply to loss. Default value is AUTO. AUTO indicates that the reduction option will be determined by the usage context. For almost all cases this defaults to SUM_OVER_BATCH_SIZE. When used with tf.distribute.Strategy, outside of built-in training loops such as tf.keras compile and fit, using AUTO or SUM_OVER_BATCH_SIZE will raise an error. Please see https://www.tensorflow.org/alpha/tutorials/distribute/training_loops for more details on this.
name: Optional name for the op.
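For instance, a loss constructed with Reduction.NONE returns the unreduced per-sample values; a small illustration using the built-in MeanSquaredError with made-up inputs:

import tensorflow as tf

mse_none = tf.keras.losses.MeanSquaredError(
    reduction=tf.keras.losses.Reduction.NONE)
y_true = tf.constant([[0., 1.], [0., 0.]])
y_pred = tf.constant([[1., 1.], [1., 0.]])
# One loss value per sample, shape [batch_size]:
print(mse_none(y_true, y_pred).numpy())  # [0.5 0.5]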
__init__
__init__(
    reduction=losses_utils.ReductionV2.AUTO,
    name=None
)
Initialize self. See help(type(self)) for accurate signature.
Methods
tf.keras.losses.Loss.__call__
__call__(
    y_true,
    y_pred,
    sample_weight=None
)

Invokes the Loss instance.
Args:
y_true: Ground truth values. shape = [batch_size, d0, .. dN]
y_pred: The predicted values. shape = [batch_size, d0, .. dN]
sample_weight: Optional sample_weight acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If sample_weight is a tensor of size [batch_size], then the total loss for each sample of the batch is rescaled by the corresponding element in the sample_weight vector. If the shape of sample_weight is [batch_size, d0, .. dN-1] (or can be broadcasted to this shape), then each loss element of y_pred is scaled by the corresponding value of sample_weight. (Note on dN-1: all loss functions reduce by 1 dimension, usually axis=-1.)
Returns:
Weighted loss float Tensor. If reduction is NONE, this has shape [batch_size, d0, .. dN-1]; otherwise, it is scalar. (Note dN-1 because all loss functions reduce by 1 dimension, usually axis=-1.)
Raises:
ValueError: If the shape of sample_weight is invalid.
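To illustrate the sample_weight behavior, a small sketch using the built-in MeanSquaredError (the per-sample losses for these inputs are [0.5, 0.5]):

import tensorflow as tf

mse = tf.keras.losses.MeanSquaredError()
y_true = tf.constant([[0., 1.], [0., 0.]])
y_pred = tf.constant([[1., 1.], [1., 0.]])

# Unweighted: (0.5 + 0.5) / batch_size = 0.5
print(mse(y_true, y_pred).numpy())

# A [batch_size] weight vector rescales each sample's loss before the
# reduction: (0.7 * 0.5 + 0.3 * 0.5) / batch_size = 0.25
print(mse(y_true, y_pred, sample_weight=tf.constant([0.7, 0.3])).numpy())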
tf.keras.losses.Loss.call
call(
    y_true,
    y_pred
)

Invokes the Loss instance.

Args:
y_true: Ground truth values, with the same shape as 'y_pred'.
y_pred: The predicted values.
tf.keras.losses.Loss.from_config
@classmethod
from_config(
    cls,
    config
)

Instantiates a Loss from its config (output of get_config()).

Args:
config: Output of get_config().

Returns:
A Loss instance.
tf.keras.losses.Loss.get_config
get_config()

Returns the config dictionary for a Loss instance.
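A quick round-trip sketch with a built-in subclass (the exact keys of the config dict are an assumption about this implementation):

import tensorflow as tf

mse = tf.keras.losses.MeanSquaredError(name='mse')
config = mse.get_config()  # plain dict, e.g. with 'name' and 'reduction' keys
restored = tf.keras.losses.MeanSquaredError.from_config(config)
print(restored.name)  # 'mse'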