Microsoft yesterday announced the open-sourcing of ONNX Runtime, a high-performance inference engine for ONNX-format machine learning models, available for Linux, Windows, and Mac. ONNX Runtime lets developers train and tune models in any supported framework and run them at high performance in the cloud and at the edge. Microsoft itself uses it in Bing search, Bing Ads, Office productivity services, and more.
ONNX brings interoperability to the AI framework ecosystem, providing definitions of scalable computational graph models, as well as definitions of built-in operators and standard data types.
ONNX enables models to be trained in one framework and transferred to another for inference. Currently, Caffe2, Cognitive Toolkit, and PyTorch natively support the ONNX format.
ONNX Runtime is an open architecture that is continually evolving to adapt to and address the newest developments and challenges in AI and Deep Learning. We will keep ONNX Runtime up to date with the ONNX standard, supporting all future ONNX releases while maintaining backwards compatibility with prior releases.
ONNX Runtime continuously strives to provide top performance for a broad and growing set of machine learning usage scenarios. Our investments focus on three core areas:
- Run any ONNX model
- High performance
- Cross platform
For more information, see the ONNX Runtime repository on GitHub.