- Like linear models, Naive Bayes does not perform as well when the classes are no longer easily separable.
Pros
- Fast to train and to use for prediction. (Because of the naive simplifying assumption, only simple per-class statistics need to be estimated for each feature, and each feature is handled independently.)
- Works well with high-dimensional datasets.
- Often useful as a baseline comparison against more sophisticated methods (see the sketch after this list).
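A minimal sketch of these points, assuming scikit-learn and an illustrative synthetic dataset: Naive Bayes is fit as a fast baseline on high-dimensional data and compared against a more sophisticated model (logistic regression here, as one example).

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression

# High-dimensional synthetic data: 2000 samples, 500 features (illustrative sizes).
X, y = make_classification(n_samples=2000, n_features=500,
                           n_informative=50, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Naive Bayes only estimates per-class mean/variance for each feature,
# so training is very fast even with many features.
nb = GaussianNB().fit(X_train, y_train)
print("Naive Bayes baseline accuracy:", nb.score(X_test, y_test))

# A more sophisticated classifier to compare the baseline against.
lr = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Logistic regression accuracy:", lr.score(X_test, y_test))
```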
Cons
- The assumption that features are conditionally independent given the class is not realistic.
- Generalization performance may be worse than that of more sophisticated learning methods; methods that can account for these feature dependencies are likely to outperform Naive Bayes.
- The confidence or probability estimates associated with Naive Bayes predictions are typically unreliable (see the sketch below).
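A minimal sketch of the last point, assuming scikit-learn and illustrative synthetic data: when features are correlated, the independence assumption double-counts evidence, so `predict_proba` tends to be pushed toward 0 or 1. Wrapping the model in `CalibratedClassifierCV` is one standard way to obtain better-calibrated probabilities.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.calibration import CalibratedClassifierCV

# Redundant (correlated) features deliberately violate the independence assumption.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=5,
                           n_redundant=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

nb = GaussianNB().fit(X_train, y_train)
proba = nb.predict_proba(X_test)
print("Fraction of predictions claiming >99% confidence:",
      np.mean(proba.max(axis=1) > 0.99))

# Recalibrating the probabilities with cross-validated calibration.
calibrated = CalibratedClassifierCV(GaussianNB(), cv=5).fit(X_train, y_train)
print("Calibrated probabilities (first 3 test points):",
      calibrated.predict_proba(X_test[:3]))
```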
It can be shown that Naive Bayes classifiers are mathematically related to linear models, so many of the pros and cons of linear models also apply to Naive Bayes.
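A minimal sketch of this relationship, assuming scikit-learn: for a binary problem, a fitted `MultinomialNB` makes the same predictions as a linear classifier sign(x . w + b), where w and b are read off the fitted log-probabilities.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.naive_bayes import MultinomialNB

# MultinomialNB expects non-negative (count-like) features, so take absolute values.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X = np.abs(X)

nb = MultinomialNB().fit(X, y)

# Coefficients of the equivalent linear classifier:
# log-odds(x) = x . (log theta_1 - log theta_0) + (log prior_1 - log prior_0)
w = nb.feature_log_prob_[1] - nb.feature_log_prob_[0]
b = nb.class_log_prior_[1] - nb.class_log_prior_[0]

linear_pred = (X @ w + b > 0).astype(int)
print("Agreement with MultinomialNB predictions:",
      np.mean(linear_pred == nb.predict(X)))  # expected to be 1.0
```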