Apparently, the Open Neural Network Exchange (ONNX) Runtime is an inference engine with an API that lets you run models locally instead of on a remote machine.<p>I didn't see any details about the engine internals, so I assume "inference" here means neural-network inference (a forward pass through a trained model) rather than a symbolic AI inference engine that chains logical rules.