Out-of-Distribution (OOD) detection is arguably the most important problem for safe and deployable ML: it provides a first line of defense by preventing silent failures in critical ML systems, it bounds AI capabilities by recognizing the limits of a model's knowledge, and it enables safe fallback and human oversight when needed.

Forte takes a novel approach to OOD detection with several key advantages. It uses self-supervised representations to capture semantic features and incorporates manifold estimation to account for local topology. Deployment overhead is kept to a minimum, since no additional model training is required. Furthermore, Forte needs no class labels, no exposure to OOD data during training, and places no restrictions on the architecture of the predictive or generative models. We have also demonstrated strong domain generalization, evaluating it on tasks ranging from synthetic-data detection to MRI images.
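To make the training-free, label-free setup concrete, the following is a minimal sketch of one common nonparametric scoring scheme in this family: score each test point by its mean distance to its k nearest in-distribution neighbors in a (here, synthetic stand-in for a) self-supervised embedding space. This is an illustrative baseline, not Forte's actual estimator; the function name, the choice of k, and the random "embeddings" are all assumptions for the example.

```python
import numpy as np

def knn_ood_score(train_feats, test_feats, k=5):
    """Score each test feature by its mean Euclidean distance to the k
    nearest in-distribution features (higher score = more likely OOD).
    Illustrative nonparametric baseline, not Forte's exact method."""
    # Pairwise distances between test (m, d) and train (n, d) features.
    d = np.linalg.norm(test_feats[:, None, :] - train_feats[None, :, :], axis=-1)
    # Keep only the k smallest distances per test point, then average.
    knn = np.sort(d, axis=1)[:, :k]
    return knn.mean(axis=1)

rng = np.random.default_rng(0)
in_dist = rng.normal(0.0, 1.0, size=(500, 32))  # stand-in for SSL embeddings
near = rng.normal(0.0, 1.0, size=(10, 32))      # in-distribution test points
far = rng.normal(6.0, 1.0, size=(10, 32))       # shifted, out-of-distribution points

scores_in = knn_ood_score(in_dist, near)
scores_out = knn_ood_score(in_dist, far)
print(scores_in.mean() < scores_out.mean())  # OOD points score higher
```

Note that nothing here requires class labels, OOD examples, or retraining: only access to in-distribution embeddings, which is the deployment regime the paragraph above describes.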