🟪 1-Minute Summary

The ROC (Receiver Operating Characteristic) curve plots TPR (recall) against FPR at every classification threshold. AUC (Area Under the Curve) summarizes the ROC curve in a single number between 0 and 1: AUC = 1 is a perfect classifier, 0.5 is random guessing. Use it to evaluate a model's ability to distinguish the classes across all thresholds, independent of the class distribution and of any single threshold choice. More informative than accuracy on imbalanced data, though for severe imbalance the precision-recall curve is usually the better tool (see Common Mistakes).


🟦 Core Notes (Must-Know)

What is ROC Curve?

The ROC curve plots the true positive rate, TPR = TP / (TP + FN) (recall/sensitivity), against the false positive rate, FPR = FP / (FP + TN) (1 − specificity). Each point on the curve is the (FPR, TPR) pair you get by thresholding the model's predicted scores at one particular value; sweeping the threshold from high to low traces out the whole curve.
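
A minimal sketch of what one point on the curve means, using made-up labels and scores (the arrays are illustrative assumptions):

import numpy as np

y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])                   # hypothetical ground-truth labels
scores = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.5, 0.9])  # hypothetical predicted probabilities

threshold = 0.5
y_pred = (scores >= threshold).astype(int)

tp = np.sum((y_pred == 1) & (y_true == 1))
fp = np.sum((y_pred == 1) & (y_true == 0))
fn = np.sum((y_pred == 0) & (y_true == 1))
tn = np.sum((y_pred == 0) & (y_true == 0))

tpr = tp / (tp + fn)   # recall / sensitivity
fpr = fp / (fp + tn)   # 1 - specificity
print(f"threshold={threshold}: TPR={tpr:.2f}, FPR={fpr:.2f}")  # one point on the ROC curve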

How to Read ROC Curve

The x-axis is FPR, the y-axis is TPR. The curve starts at (0, 0) (threshold so high that nothing is predicted positive) and ends at (1, 1) (threshold so low that everything is predicted positive); lowering the threshold moves you up and to the right. A curve that bows toward the top-left corner means high recall at a low false positive rate, i.e. good separation; the diagonal from (0, 0) to (1, 1) is what random guessing looks like.
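
sklearn's roc_curve does the threshold sweep for you; a small sketch on the same toy data as above, printing how each threshold maps to a point on the curve:

import numpy as np
from sklearn.metrics import roc_curve

y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])
scores = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.5, 0.9])

fpr, tpr, thresholds = roc_curve(y_true, scores)
for t, f, r in zip(thresholds, fpr, tpr):      # lower threshold -> move up and right along the curve
    print(f"threshold={t:.2f}  FPR={f:.2f}  TPR={r:.2f}")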

What is AUC?

AUC is the area under the ROC curve: a single number between 0 and 1 that summarizes performance over all thresholds. It has a handy probabilistic interpretation: AUC equals the probability that the model gives a randomly chosen positive example a higher score than a randomly chosen negative example. Because it depends only on how examples are ranked, it is threshold-free and insensitive to the scale of the scores.
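
A sketch that checks this ranking interpretation numerically (the way the labels and scores are generated is just an assumption for illustration): the fraction of correctly ordered positive/negative pairs matches roc_auc_score.

import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=500)              # hypothetical labels
scores = rng.random(500) * 0.6 + y * 0.3      # scores loosely correlated with the labels

# Fraction of (positive, negative) pairs where the positive is scored higher (ties count half)
pos, neg = scores[y == 1], scores[y == 0]
pairwise = np.mean(pos[:, None] > neg[None, :]) + 0.5 * np.mean(pos[:, None] == neg[None, :])

print(pairwise)                  # probability a random positive outranks a random negative
print(roc_auc_score(y, scores))  # same value (up to floating-point rounding)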

Interpreting AUC Values

Rules of thumb only; what counts as a "good" AUC depends heavily on the domain and the difficulty of the problem. An AUC below 0.5 means the ranking is worse than random, i.e. systematically inverted.

  • AUC = 1.0: Perfect classifier
  • AUC = 0.9-1.0: Excellent
  • AUC = 0.8-0.9: Good
  • AUC = 0.7-0.8: Fair
  • AUC = 0.5-0.7: Poor
  • AUC = 0.5: Random guessing

When to Use ROC-AUC

Use ROC-AUC when you want a threshold-free comparison of how well models rank positives above negatives, when both classes matter roughly equally, and when the class distribution is not extremely skewed. It is also convenient for comparing models whose scores live on different scales, since only the ranking matters (see the sketch below). For rare-positive problems where false positives are costly, prefer the precision-recall curve.
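
Because only the ranking of scores matters, ROC-AUC is unchanged by any strictly increasing transformation of the scores; a quick sketch with made-up numbers:

import numpy as np
from sklearn.metrics import roc_auc_score

y      = np.array([0, 1, 0, 1, 1, 0, 1, 0])
scores = np.array([0.2, 0.6, 0.7, 0.8, 0.55, 0.1, 0.9, 0.4])

# All three calls print the same AUC: the transformations preserve the ranking of the scores
print(roc_auc_score(y, scores))
print(roc_auc_score(y, 100 * scores - 7))
print(roc_auc_score(y, np.log(scores)))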


🟨 Interview Triggers (What Interviewers Actually Test)

Common Interview Questions

  1. “Explain the ROC curve”

    • [Answer: Plots TPR vs FPR at all thresholds]
  2. “What does AUC = 0.85 mean?”

    • [Answer: 85% chance model ranks random positive higher than random negative]
  3. “When would you use ROC-AUC vs precision-recall curve?”

    • [Answer: ROC-AUC when classes are roughly balanced or both error types matter; PR curve when positives are rare and false positives are costly]
  4. “What does a diagonal ROC curve mean?”

    • [Answer: Random classifier (AUC = 0.5)]

🟥 Common Mistakes (Traps to Avoid)

Mistake 1: Using ROC-AUC for highly imbalanced data

With a highly imbalanced dataset, FPR can stay tiny even when the model produces many false positives, because the huge number of true negatives dominates the denominator (FP + TN). The ROC curve and AUC can therefore look excellent while precision on the rare positive class is poor. Use the precision-recall curve and average precision (PR-AUC) instead, since they focus on the positive class.
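
A sketch of the effect on synthetic data with roughly 1% positives (the dataset settings and the logistic-regression model are illustrative assumptions): ROC-AUC tends to look flattering while average precision, the PR-curve summary, is usually much lower and more informative here.

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import average_precision_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Heavily imbalanced synthetic data: ~99% negatives, ~1% positives
X, y = make_classification(n_samples=20000, weights=[0.99, 0.01], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

proba = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
print("ROC-AUC:          ", roc_auc_score(y_te, proba))            # typically looks impressive
print("Average precision:", average_precision_score(y_te, proba))  # typically much lower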

Mistake 2: Thinking AUC tells you the best threshold

AUC aggregates performance over every possible threshold, so it says nothing about which threshold to actually deploy; two models with the same AUC can behave very differently at a given operating point. Picking a threshold is a separate decision driven by the relative costs of false positives and false negatives.
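
If you do need an operating point, choose it separately from the roc_curve output; a sketch using Youden's J statistic (TPR − FPR), one common heuristic among several, on toy data:

import numpy as np
from sklearn.metrics import roc_curve

y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])
scores = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.5, 0.9])

fpr, tpr, thresholds = roc_curve(y_true, scores)
j = tpr - fpr                                    # Youden's J at each candidate threshold
best = thresholds[np.argmax(j)]
print("Threshold maximizing TPR - FPR:", best)   # AUC alone would not tell you this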


🟩 Mini Example (Quick Application)

Scenario

You have the predicted probabilities from two binary classifiers on the same test set and want to know which one separates the classes better, without committing to a specific threshold. Compare them with ROC-AUC.

Solution

import numpy as np
from sklearn.metrics import roc_auc_score

y_true   = np.array([0, 0, 1, 1, 0, 1, 0, 1])                   # hypothetical test labels
scores_a = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.5, 0.9])  # classifier A's probabilities
scores_b = np.array([0.3, 0.6, 0.2, 0.7, 0.4, 0.5, 0.6, 0.8])   # classifier B's probabilities

print("AUC A:", roc_auc_score(y_true, scores_a))  # 0.875 -> A ranks positives above negatives more often
print("AUC B:", roc_auc_score(y_true, scores_b))  # 0.625 -> B separates the classes less well
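
For a more end-to-end version, a sketch that trains two models on synthetic data and overlays their ROC curves (the dataset settings and the choice of models are illustrative assumptions, not part of the original notes):

import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=0),
}

plt.figure()
for name, model in models.items():
    proba = model.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]   # positive-class probabilities
    fpr, tpr, _ = roc_curve(y_te, proba)
    plt.plot(fpr, tpr, label=f"{name} (AUC = {roc_auc_score(y_te, proba):.3f})")

plt.plot([0, 1], [0, 1], linestyle="--", label="Random guessing (AUC = 0.5)")  # diagonal baseline
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate (Recall)")
plt.legend()
plt.show()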

