Post Intel® Geti™ workflow on Intel® hardware
In this section of the documentation, you will learn how to test, optimize, and deploy your trained model, and see how it performs on Intel® hardware.
Intel® Geti™ uses the open-source OpenVINO™ toolkit to optimize your trained model. The toolkit itself consists of many components that offer a high degree of granularity and customization to software developers.
You can download the Intel® Distribution of OpenVINO™ Toolkit or access the repository on GitHub.
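After installing the toolkit (for example, the Python package from PyPI), you can quickly verify that OpenVINO™ detects your Intel® hardware. The snippet below is a minimal sketch using the OpenVINO™ Python API:

```python
# Install the runtime first, e.g.: pip install openvino
from openvino.runtime import Core

core = Core()
# Lists the inference devices OpenVINO detects on this machine,
# e.g. ['CPU', 'GPU'], depending on your hardware and drivers.
print(core.available_devices)
```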
Test
To test your models, we recommend Deep Learning Workbench (DL Workbench), a graphical user interface designed for OpenVINO™.
To install DL Workbench, follow this installation guide. If you do not have access to a wide range of Intel® architectures, you can also install DL Workbench on Intel® DevCloud, which offers free access to registered users.
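If you prefer a quick programmatic check alongside the DL Workbench GUI, you can time a few inferences with the OpenVINO™ Python API. The sketch below is illustrative only: the path model.xml stands in for the IR file exported from Intel® Geti™, and it assumes the model has a static input shape.

```python
import time
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")  # placeholder path for your exported IR
compiled = core.compile_model(model, "CPU")

# Dummy input matching the model's first (static) input shape.
dummy = np.random.rand(*compiled.input(0).shape).astype(np.float32)

# Warm up once, then time a handful of inferences.
compiled([dummy])
runs = 50
start = time.perf_counter()
for _ in range(runs):
    compiled([dummy])
elapsed = time.perf_counter() - start
print(f"average latency: {elapsed / runs * 1000:.2f} ms")
```

For more rigorous measurements, DL Workbench and the toolkit's benchmark_app command-line tool report throughput and latency across devices and precisions.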
Optimize
You can optimize your trained models within Intel® Geti™. Follow our end-to-end tutorial.
Alternatively, you can use individual OpenVINO™ components. Model Optimizer is a cross-platform command-line tool that converts trained models from popular frameworks into the OpenVINO™ Intermediate Representation (IR). The Post-Training Optimization Tool and the Neural Network Compression Framework (NNCF) provide a suite of advanced algorithms for neural network inference optimization with minimal accuracy drop.
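As an illustration, the snippet below sketches post-training INT8 quantization of an IR model with recent NNCF releases, which expose an nncf.quantize API for OpenVINO™ models. The model path, input shape, and random calibration data are placeholders; in practice you would feed real, preprocessed samples from your dataset.

```python
import numpy as np
import nncf
from openvino.runtime import Core, serialize

# The FP32 IR can be produced beforehand with Model Optimizer,
# e.g.: mo --input_model model.onnx
core = Core()
model = core.read_model("model.xml")  # placeholder: FP32 IR to quantize

# Placeholder calibration data; substitute real samples shaped like
# your model input (1x3x224x224 is an assumption here).
calibration_data = [
    np.random.rand(1, 3, 224, 224).astype(np.float32) for _ in range(100)
]
calibration_dataset = nncf.Dataset(calibration_data)

# Post-training quantization with NNCF's default settings.
quantized_model = nncf.quantize(model, calibration_dataset)
serialize(quantized_model, "model_int8.xml", "model_int8.bin")
```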
Deploy
Once you have tested and optimized the model, you are ready to deploy your solution. You can do this by leveraging the Inference Engine, a set of C++ libraries with C and Python bindings for executing inference on a range of supported devices.
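As a sketch of what a minimal deployment loop might look like through the Python bindings, the snippet below compiles the optimized model and wraps inference in a small helper. The model path, device choice, and preprocessing are assumptions to adapt to your solution.

```python
import numpy as np
from openvino.runtime import Core

core = Core()
# "model_int8.xml" is a placeholder for the optimized IR from the previous step.
model = core.read_model("model_int8.xml")

# "AUTO" lets OpenVINO pick the best available device;
# you can also pin one explicitly, e.g. "CPU" or "GPU".
compiled = core.compile_model(model, "AUTO")
output_port = compiled.output(0)

def predict(frame: np.ndarray) -> np.ndarray:
    # frame is assumed to be preprocessed to the model's input shape.
    return compiled([frame])[output_port]

# Example call with a dummy frame.
dummy = np.random.rand(*compiled.input(0).shape).astype(np.float32)
print(predict(dummy).shape)
```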