capymoa.evaluation#

Modules#

Classes#

ClassificationEvaluator

Wrapper for the Classification Performance Evaluator from MOA.

ClassificationWindowedEvaluator

Uses the ClassificationEvaluator to perform a windowed evaluation.

RegressionWindowedEvaluator

Uses the RegressionEvaluator to perform a windowed evaluation.

RegressionEvaluator

Wrapper for the Regression Performance Evaluator from MOA.

PredictionIntervalEvaluator

Wrapper for the Prediction Interval Performance Evaluator from MOA.

PredictionIntervalWindowedEvaluator

Uses the PredictionIntervalEvaluator to perform a windowed evaluation.

AnomalyDetectionEvaluator

Wrapper for the Anomaly (AUC) Performance Evaluator from MOA.

ClusteringEvaluator

Abstract clustering evaluator for CapyMOA.
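
These evaluators can also be driven by hand in a test-then-train loop, independently of the helper functions below. The following is a minimal sketch, assuming the ElectricityTiny dataset and HoeffdingTree classifier are available and that ClassificationEvaluator exposes the update() and accuracy() accessors shown; verify the exact names against your installed CapyMOA version.

    # Manual test-then-train loop using ClassificationEvaluator directly.
    # Dataset, classifier, and accessor names are assumptions; adapt them
    # to your installation.
    from capymoa.datasets import ElectricityTiny
    from capymoa.classifier import HoeffdingTree
    from capymoa.evaluation import ClassificationEvaluator

    stream = ElectricityTiny()
    learner = HoeffdingTree(schema=stream.get_schema())
    evaluator = ClassificationEvaluator(schema=stream.get_schema())

    while stream.has_more_instances():
        instance = stream.next_instance()
        prediction = learner.predict(instance)          # test first ...
        evaluator.update(instance.y_index, prediction)  # ... record the result ...
        learner.train(instance)                         # ... then train

    print(f"Cumulative accuracy: {evaluator.accuracy():.2f}")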

Functions#

capymoa.evaluation.prequential_evaluation(
stream: Stream,
learner: Classifier | Regressor,
max_instances: int | None = None,
window_size: int = 1000,
store_predictions: bool = False,
store_y: bool = False,
optimise: bool = True,
restart_stream: bool = True,
) → PrequentialResults[source]#

Run and evaluate a learner on a stream using prequential evaluation.

Calculates the metrics cumulatively (i.e. test-then-train) and in a windowed fashion (i.e. windowed prequential evaluation). Returns both evaluators so that the user has access to both sets of metrics.

Parameters:
  • stream – A data stream to evaluate the learner on. Will be restarted if restart_stream is True.

  • learner – The learner to evaluate.

  • max_instances – The number of instances to evaluate before exiting. If None, the evaluation will continue until the stream is empty.

  • window_size – The size of the window used for windowed evaluation, defaults to 1000

  • store_predictions – Store the learner’s predictions in a list, defaults to False

  • store_y – Store the ground truth targets in a list, defaults to False

  • optimise – If True and the learner is compatible, the evaluator will use a Java native evaluation loop, defaults to True.

  • restart_stream – If False, evaluation will continue from the current position in the stream, defaults to True. Not restarting the stream is useful for switching between learners or evaluators, without starting from the beginning of the stream.

Returns:

An object containing the results of the evaluation: windowed metrics, cumulative metrics, ground truth targets, and predictions.
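
A minimal usage sketch, assuming the ElectricityTiny dataset and AdaptiveRandomForestClassifier are available and that the returned PrequentialResults exposes the cumulative and windowed evaluators as attributes (treat these accessors as assumptions):

    # Prequential (test-then-train) evaluation of a single learner.
    from capymoa.datasets import ElectricityTiny
    from capymoa.classifier import AdaptiveRandomForestClassifier
    from capymoa.evaluation import prequential_evaluation

    stream = ElectricityTiny()
    learner = AdaptiveRandomForestClassifier(schema=stream.get_schema())

    results = prequential_evaluation(
        stream=stream,
        learner=learner,
        window_size=500,         # report windowed metrics every 500 instances
        store_predictions=True,  # keep predictions for later inspection
        store_y=True,            # keep the ground truth targets as well
    )

    print(f"Cumulative accuracy: {results.cumulative.accuracy():.2f}")  # assumed accessor
    print(results.windowed.metrics_per_window())                        # assumed accessor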

capymoa.evaluation.prequential_ssl_evaluation(
stream: Stream,
learner: ClassifierSSL | Classifier,
max_instances: int | None = None,
window_size: int = 1000,
initial_window_size: int = 0,
delay_length: int = 0,
label_probability: float = 0.01,
random_seed: int = 1,
store_predictions: bool = False,
store_y: bool = False,
optimise: bool = True,
restart_stream: bool = True,
)[source]#

Run and evaluate a learner on a semi-supervised stream using prequential evaluation.

Parameters:
  • stream – A data stream to evaluate the learner on. Will be restarted if restart_stream is True.

  • learner – The learner to evaluate. If the learner is an SSL learner, it will be trained on both labeled and unlabeled instances. If the learner is not an SSL learner, then it will be trained only on the labeled instances.

  • max_instances – The number of instances to evaluate before exiting. If None, the evaluation will continue until the stream is empty.

  • window_size – The size of the window used for windowed evaluation, defaults to 1000

  • initial_window_size – Not implemented yet

  • delay_length – If greater than zero, the labeled instances (the label_probability fraction of the stream) will first appear as unlabeled and reappear as labeled after delay_length instances, defaults to 0

  • label_probability – The proportion of instances that will be labeled, must be in the range [0, 1], defaults to 0.01

  • random_seed – A random seed to define the random state that decides which instances are labeled and which are not, defaults to 1.

  • store_predictions – Store the learner’s predictions in a list, defaults to False

  • store_y – Store the ground truth targets in a list, defaults to False

  • optimise – If True and the learner is compatible, the evaluator will use a Java native evaluation loop, defaults to True.

  • restart_stream – If False, evaluation will continue from the current position in the stream, defaults to True. Not restarting the stream is useful for switching between learners or evaluators, without starting from the beginning of the stream.

Returns:

An object containing the results of the evaluation: windowed metrics, cumulative metrics, ground truth targets, and predictions.
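
A minimal sketch using a plain (non-SSL) classifier, so only the labeled fraction of the stream is used for training; the dataset, classifier, and result accessors are assumptions:

    # Semi-supervised prequential evaluation with 5% of instances labeled.
    from capymoa.datasets import ElectricityTiny
    from capymoa.classifier import HoeffdingTree
    from capymoa.evaluation import prequential_ssl_evaluation

    stream = ElectricityTiny()
    learner = HoeffdingTree(schema=stream.get_schema())

    results = prequential_ssl_evaluation(
        stream=stream,
        learner=learner,
        label_probability=0.05,  # roughly 5% of the instances arrive labeled
        delay_length=0,          # labels are not delayed
        window_size=500,
    )

    print(f"Cumulative accuracy: {results.cumulative.accuracy():.2f}")  # assumed accessor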

capymoa.evaluation.prequential_evaluation_multiple_learners(
stream,
learners,
max_instances=None,
window_size=1000,
store_predictions=False,
store_y=False,
)[source]#

Calculates the metrics cumulatively (i.e., test-then-train) and in a windowed fashion for multiple learners on a single stream. It behaves as if prequential_evaluation() had been invoked once per learner, but the stream is only iterated through once. This function is useful in situations where iterating through the stream is costly, but we still want to assess several learners on it. Returns the results in a dictionary format. Infers whether it is a Classification or Regression problem based on the stream schema.
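
A sketch of evaluating two classifiers on a single pass over the stream; it assumes learners is a name-to-learner dictionary and that the returned dictionary is keyed by the same names (check both against your CapyMOA version):

    # One pass over the stream, several learners evaluated side by side.
    from capymoa.datasets import ElectricityTiny
    from capymoa.classifier import HoeffdingTree, NaiveBayes
    from capymoa.evaluation import prequential_evaluation_multiple_learners

    stream = ElectricityTiny()
    schema = stream.get_schema()
    learners = {
        "HoeffdingTree": HoeffdingTree(schema=schema),
        "NaiveBayes": NaiveBayes(schema=schema),
    }

    results = prequential_evaluation_multiple_learners(
        stream, learners, window_size=500
    )
    for name, result in results.items():
        print(name, result.cumulative.accuracy())  # assumed accessor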

capymoa.evaluation.prequential_evaluation_anomaly(
stream,
learner,
max_instances=None,
window_size=1000,
optimise=True,
store_predictions=False,
store_y=False,
)[source]#

Calculates the metrics cumulatively (i.e. test-then-train) and in a windowed fashion (i.e. windowed prequential evaluation). Returns both evaluators so that the user has access to both sets of metrics.
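
A minimal sketch with a Half-Space Trees detector; the HalfSpaceTrees class and the auc() accessor on the cumulative results are assumptions and should be checked against the installed CapyMOA version:

    # Prequential evaluation of an anomaly detector.
    from capymoa.datasets import ElectricityTiny
    from capymoa.anomaly import HalfSpaceTrees
    from capymoa.evaluation import prequential_evaluation_anomaly

    stream = ElectricityTiny()
    detector = HalfSpaceTrees(schema=stream.get_schema())

    results = prequential_evaluation_anomaly(
        stream=stream,
        learner=detector,
        window_size=500,
    )

    print(f"Cumulative AUC: {results.cumulative.auc():.2f}")  # assumed accessor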