Evaluation Metrics¶
Evaluation metrics provide a way to evaluate the performance of a learned model. This is typically used during training to monitor performance on the validation set.
class AbstractEvalMetric¶

The base class for all evaluation metrics. Subtypes should implement the following interface.
update!(metric, labels, preds)¶

Update and accumulate metrics.
Parameters:
- metric (AbstractEvalMetric) – the metric object.
- labels (Vector{NDArray}) – the labels from the data provider.
- preds (Vector{NDArray}) – the outputs (predictions) of the network.
reset!(metric)¶

Reset the accumulation counter.
get(metric)¶

Get the accumulated metrics.

Returns: Vector{Tuple{Base.Symbol, Real}}, a list of name-value pairs. For example, [(:accuracy, 0.9)].
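The interface above can be sketched with a small custom metric. This is an illustrative, self-contained example, not the library's implementation: plain Vector{Float64} mini-batches stand in for NDArray so it runs without MXNet, and the CountMetric type is hypothetical.

```julia
# Minimal sketch of a metric implementing the documented interface.
# CountMetric is an illustrative name; the fields accumulate across
# calls to update! until reset! is called.
abstract type AbstractEvalMetric end

mutable struct CountMetric <: AbstractEvalMetric
    n_correct :: Int
    n_total   :: Int
end
CountMetric() = CountMetric(0, 0)

# labels/preds are vectors of mini-batches; Vector{Float64} stands in
# for NDArray in this sketch.
function update!(metric::CountMetric, labels::Vector{Vector{Float64}},
                 preds::Vector{Vector{Float64}})
    for (label, pred) in zip(labels, preds)
        metric.n_correct += count(label .== pred)
        metric.n_total   += length(label)
    end
end

function reset!(metric::CountMetric)
    metric.n_correct = 0
    metric.n_total   = 0
end

# Returns the accumulated metric as a list of name-value pairs,
# matching the documented return convention.
get(metric::CountMetric) = [(:accuracy, metric.n_correct / metric.n_total)]
```

A training loop would call update! once per mini-batch, read the result with get, and call reset! at the start of each epoch.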
class Accuracy¶

Multiclass classification accuracy.

Calculates the mean per-sample accuracy for a one-dimensional softmax output. For a multi-dimensional softmax, the mean accuracy over all dimensions is calculated.
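The one-dimensional case can be sketched as follows. This is an assumption-laden illustration, not the library's code: preds is taken to be an n_classes × n_samples matrix of softmax outputs, and labels are 0-based class indices.

```julia
# Illustrative accuracy computation: the predicted class is the index
# of the largest softmax output for each sample (converted to a 0-based
# label), compared against the true labels.
function accuracy(labels::Vector{Int}, preds::Matrix{Float64})
    predicted = [argmax(preds[:, j]) - 1 for j in 1:size(preds, 2)]
    count(predicted .== labels) / length(labels)
end
```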
class MSE¶

Mean Squared Error.

Calculates the mean squared error regression loss in one dimension. TODO: add support for multi-dimensional outputs.
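The one-dimensional loss amounts to the following sketch (plain vectors stand in for NDArray; the function name is illustrative):

```julia
# Mean squared error over one batch of scalar predictions:
# the average of the squared differences between labels and predictions.
function mse(labels::Vector{Float64}, preds::Vector{Float64})
    sum((labels .- preds) .^ 2) / length(labels)
end
```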