src.scoring module
Scoring metrics for concept quality evaluation.
Provides metrics for quantifying how well a concept explains a neuron's behavior by comparing the neuron's activations on synthetic concept images with its activations on control images.
- class src.scoring.Metric(*values)[source]
Bases: Enum
Enumeration of concept scoring metrics.
- AUC
Area under ROC curve between control and synthetic activations.
- MAD
Mean absolute difference between synthetic and control activations, normalized by the standard deviation of the control activations.
- AVG_ACTIVATION
Average activation magnitude on synthetic images.
- AUC = <function _calculate_auc>
- AVG_ACTIVATION = <function _calculate_average_activation>
- MAD = <function _calculate_mad>
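The three metrics can be sketched as follows. This is a hedged illustration of what each one measures, using plain Python lists in place of `torch.Tensor`; the exact formulas inside `src.scoring` (e.g. tie handling in the AUC, or which standard deviation estimator is used) may differ.

```python
from statistics import mean, stdev

def auc(control, synthetic):
    # Rank-based AUC: fraction of (control, synthetic) pairs in which the
    # synthetic activation exceeds the control activation (ties score 0.5).
    wins = sum(1.0 if s > c else 0.5 if s == c else 0.0
               for c in control for s in synthetic)
    return wins / (len(control) * len(synthetic))

def mad(control, synthetic):
    # Difference of mean activations, normalized by the control standard
    # deviation (an effect-size-style score; assumed formula).
    return abs(mean(synthetic) - mean(control)) / stdev(control)

def avg_activation(control, synthetic):
    # Average activation magnitude on synthetic images only; the control
    # activations are not used by this metric.
    return mean(synthetic)

control = [0.1, 0.2, 0.15, 0.05]    # activations on control images
synthetic = [0.8, 0.9, 0.7, 0.95]   # activations on concept images

print(auc(control, synthetic))  # 1.0: every synthetic activation exceeds every control one
```

A concept that strongly drives the neuron pushes the AUC toward 1.0 and the MAD well above zero, while AVG_ACTIVATION reports raw magnitude without reference to the controls.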
- src.scoring.calculate_metric(neuron_control_activations: torch.Tensor, neuron_synthetic_activations: torch.Tensor, metric: Metric) → float[source]
Calculate the specified metric for a concept.
Computes the selected scoring metric by comparing control and synthetic neuron activations. Different metrics measure different aspects of how well a concept explains the neuron.
- Parameters:
neuron_control_activations – Neuron activations from control images.
neuron_synthetic_activations – Neuron activations from synthetic concept images.
metric – The metric to compute (AUC, MAD, or AVG_ACTIVATION).
- Returns:
Score value for the selected metric.
- Raises:
ValueError – If either activation tensor is not one-dimensional.
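A minimal sketch of the documented control flow: validate that both activation sets are one-dimensional (raising ValueError otherwise), then dispatch to the selected metric function. Plain lists stand in for `torch.Tensor`, and the dispatch table and the mean-based stand-in metric are assumptions for illustration, not the module's actual internals.

```python
from statistics import mean

def calculate_metric(control, synthetic, metric_fn):
    # Documented contract: both activation sets must be one-dimensional.
    # With nested lists standing in for 2-D tensors, any list/tuple element
    # means the input is not 1-D.
    for name, acts in (("control", control), ("synthetic", synthetic)):
        if any(isinstance(x, (list, tuple)) for x in acts):
            raise ValueError(f"{name} activations must be one-dimensional")
    # Dispatch to the chosen metric; in the real module the Metric enum
    # member carries the scoring function.
    return metric_fn(control, synthetic)

# AVG_ACTIVATION-style stand-in: mean activation on synthetic images.
score = calculate_metric([0.1, 0.2], [0.8, 0.9], lambda c, s: mean(s))
print(score)

# A 2-D input violates the contract and raises ValueError.
try:
    calculate_metric([[0.1], [0.2]], [0.8, 0.9], lambda c, s: mean(s))
except ValueError as err:
    print("raised:", err)
```

Flattening activations to one dimension (e.g. `tensor.flatten()`) before calling is the caller's responsibility under this contract.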