2022.1
Major Features and Improvements Summary
This release is the biggest upgrade in 3.5 years! Read the release notes below for a summary of changes.
The 2022.1 release provides functional bug fixes and capability changes beyond the previous 2021.4.2 LTS release. It empowers developers with new performance enhancements, support for more deep learning models, greater device portability, and higher inference performance with fewer code changes.
Note: This is a standard release intended for developers who prefer the very latest features and leading performance. Standard releases will continue to be made available three to four times a year. Long-Term Support (LTS) releases are also available. A new LTS version is released every year and is supported for two years (one year of bug fixes and two years of security patches). Read the Intel® Distribution of OpenVINO™ toolkit Long-Term Support (LTS) Policy for details. Latest LTS releases: 2020.x LTS and 2021.x LTS.
Updated, cleaner API:
- New OpenVINO API 2.0 was introduced. The API aligns OpenVINO inputs and outputs with the original frameworks: input and output tensors use native framework layouts and element types. The old Inference Engine and nGraph APIs remain available but will be deprecated in a future release.
- The inference_engine, inference_engine_transformations, inference_engine_lp_transformations, and ngraph libraries were merged into the common openvino library. Other libraries were renamed. Use the common ov:: namespace in all OpenVINO components. See how to implement an inference pipeline using OpenVINO API 2.0 for details.
- Model Optimizer's API parameters have been reduced to minimize complexity. Conversion performance has been significantly improved for ONNX models.
- Migrating to API 2.0 is highly recommended: it already provides additional features, and the list will be extended in later releases. API 2.0 adds support for the following:
- Working with dynamic shapes. This feature improves performance for Natural Language Processing (NLP) models, super-resolution models, and other models that accept inputs with dynamic shapes. Note: Models compiled with dynamic shapes may show reduced performance and consume more memory than models configured with a static shape for the same input tensor size. Setting upper bounds on the dynamic dimensions when reshaping the model, or splitting the input into several parts, is recommended.
- Preprocessing of the model, which embeds preprocessing operations into the inference model so that the accelerator is fully utilized and CPU resources are freed.
Read the Transition Guide for migrating to the new API 2.0.
Portability and Performance:
- The new AUTO plugin self-discovers available system inference capacity based on model requirements, so applications no longer need to know their compute environment in advance.
- OpenVINO™ performance hints are a new way to configure performance with portability in mind. The hints reverse the direction of configuration: rather than mapping application needs to low-level performance settings and keeping separate application logic to configure each possible device, you express a target scenario with a single configuration key and let the device configure itself in response. Because the hints are supported by every OpenVINO™ device, this is a completely portable and future-proof solution.
- Automatic batching functionality, enabled via code hints, automatically scales the batch size based on the XPU and available memory.
Broader Model Support:
- With Dynamic Input Shapes support on CPU, OpenVINO can adapt to multiple input dimensions in a single model, providing more complete NLP support. Dynamic Shapes support on additional XPUs is expected in a future dot release.
- New models with a focus on NLP, a new Anomaly Detection category, and support for conversion and inference of select PaddlePaddle models:
  - Pre-trained models: anomaly segmentation with a focus on industrial inspection; speech denoising is now trainable; plus updates to speech recognition and speech synthesis models
  - Combined demo: noise reduction + speech recognition + question answering + translation + text-to-speech
  - Public models: focus on NLP with ContextNet, Speech-Transformer, HiFi-GAN, Glow-TTS, FastSpeech2, and Wav2Vec
- Built with 12th Gen Intel® Core™ 'Alder Lake' in mind: supports the hybrid architecture to deliver enhancements for high-performance inferencing on CPU and integrated GPU
You can find OpenVINO™ toolkit 2022.1 release here:
- Download archives* with OpenVINO™ Runtime for C/C++
- OpenVINO™ Runtime for Python:
pip install openvino==2022.1.0
- OpenVINO™ Development tools:
pip install openvino-dev==2022.1.0
Release documentation is available here: https://docs.openvino.ai/2022.1/
Release Notes are available here: https://www.intel.com/content/www/us/en/developer/articles/release-notes/openvino-relnotes.html