Intel® Geti™ 1.5.0

Release Summary

These are the key highlights of Intel® Geti™ 1.5.0:

  • New model export formats, including ONNX and FP16

  • Video playback in the annotator for labeling and model assessment

  • Intel® Geti™ SDK availability through PyPI

Other Intel® Geti™ 1.5.0 updates include:

  • Easier label management during project creation

  • Hierarchical label support in the Datumaro format

  • An optimized tiling algorithm for faster inference

  • Per-label filtering of annotations and predictions in the annotator

Note

Intel® Geti™ 1.5.0 contains functionality added in the intermediate platform versions Intel® Geti™ 1.3.0 and 1.4.0. To learn more about all the new capabilities, read the related release notes for Intel® Geti™ 1.3.0 and Intel® Geti™ 1.4.0.

Release Details

This section covers additional details on the new functionality available with Intel® Geti™ 1.5.0.

New model export formats

The new model export formats provide greater flexibility and compatibility with your workflows:

  • Export models in ONNX model format for versatile deployment.

  • Leverage FP16 quantized models for a performance boost on supported inference hardware.

  • Choose between Model Optimizer (MO) FP32 models with or without the saliency map head.

Prior to this release, exportable models always included feature_vectors and saliency maps in their outputs, which could impact deployment performance due to the additional compute required. With this update, users can choose which model to deploy:

  • The model with the current hardcoded output, including feature_vectors and saliency maps

  • The model optimized for inference, without feature_vectors and saliency maps

Both models are exposed, allowing users to make a trade-off between model explainability, accuracy, and performance.
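
For example, here is a minimal sketch using the OpenVINO Python API to check which variant an exported model is. The model path and the exact output tensor names ("saliency_map", "feature_vector") are assumptions and may differ in your export:

    from openvino.runtime import Core

    core = Core()
    model = core.read_model("model.xml")  # placeholder path to the exported IR

    # Collect all output tensor names; the explainability variant exposes
    # extra outputs alongside the predictions.
    output_names = {name for output in model.outputs for name in output.get_names()}
    if {"saliency_map", "feature_vector"} & output_names:
        print("Explainability variant: saliency maps and feature vectors included")
    else:
        print("Inference-optimized variant: predictions only")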

Note

For both new and existing projects, model names will be updated to differentiate between accuracy levels and the presence or absence of model explainability. If you use scripts that call the REST API directly, keep this change in mind.
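
As an illustration, here is a hedged sketch of such a script, listing model names in a project so the renamed variants can be spotted. The endpoint path, token header, and response shape are illustrative assumptions; check them against the platform's REST API reference:

    import requests

    HOST = "https://your-geti-server.example.com"  # placeholder
    TOKEN = "your-personal-access-token"           # placeholder
    WORKSPACE_ID = "workspace-id"                  # placeholder IDs
    PROJECT_ID = "project-id"

    response = requests.get(
        f"{HOST}/api/v1/workspaces/{WORKSPACE_ID}/projects/{PROJECT_ID}/model_groups",
        headers={"Authorization": f"Bearer {TOKEN}"},  # header name may differ
    )
    response.raise_for_status()
    # Response shape is illustrative; adjust to the actual API contract.
    for group in response.json().get("model_groups", []):
        print(group.get("name"))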

Playback video data with ease

The video player offers new functionality to help label data and assess model performance.

In the annotator screen, navigate through video data and pause at any time to annotate the desired frames. You can also play the video in real time in different modes: with annotations, with label predictions, or as the original video without overlays.

Intel® Geti™ SDK available through PyPI

The Intel® Geti™ SDK (geti-sdk) is now available on PyPI for a painless, pip-based installation experience.

The SDK wraps the platform's Representational State Transfer (REST) API, helping to simplify and automate development pipelines, including project creation, data and model download, and deployment.

Access the Intel® Geti™ SDK package on PyPI.
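
As a quick start, here is a minimal sketch using the SDK; the host, credentials, and project name are placeholders:

    # Install from PyPI first:
    #   pip install geti-sdk
    from geti_sdk import Geti

    # Connect to the platform (placeholder server address and credentials)
    geti = Geti(
        host="https://your-geti-server.example.com",
        username="user@example.com",
        password="your-password",
    )

    # Create a local deployment for an existing project and run inference
    deployment = geti.deploy_project(project_name="my-project")  # placeholder name
    deployment.load_inference_models(device="CPU")
    # prediction = deployment.infer(image)  # image: a numpy array (BGR order)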

Easy label management

Create and modify labels easily while creating a project by clicking the respective label entry field.

For task chains, task information is now displayed for each label and in the label management tree for a clear view of the project structure on the Labels screen.

Expanded support for Datumaro format: hierarchical labels

Hierarchical classification labels are now supported in the Datumaro format. The platform also supports the Datumaro format for segmentation, detection, single-label classification, and all anomaly tasks.

Learn more about Datumaro’s capabilities on GitHub.
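
As an illustration, here is a minimal sketch using the Datumaro Python API to inspect the label hierarchy of an exported dataset; the dataset path is a placeholder:

    import datumaro as dm

    # Load a dataset that was exported from the platform in Datumaro format
    dataset = dm.Dataset.import_from("exported_dataset", format="datumaro")

    # Datumaro represents label hierarchies as parent references on each label
    label_categories = dataset.categories()[dm.AnnotationType.label]
    for label in label_categories.items:
        print(f"{label.name} (parent: {label.parent or '<root>'})")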

Optimized tiling algorithm

The tiling algorithm was introduced in Intel® Geti™ 1.2.0. It helps boost model accuracy for small-object detection and counting tasks by dividing high-resolution images into smaller tiles during pre-processing.

This update improves the tiling inference process, enabling faster performance while maintaining high accuracy.

Learn more about the platform’s customized Mask R-CNN model used in the tiling functionality.
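
To illustrate the general idea (this sketch is not the platform's implementation), high-resolution images can be split into overlapping tiles, with per-tile results shifted back into full-image coordinates:

    import numpy as np

    def make_tiles(image: np.ndarray, tile_size: int = 512, overlap: int = 64):
        """Yield (x, y, tile) for overlapping tiles covering the image."""
        step = tile_size - overlap
        height, width = image.shape[:2]
        for y in range(0, max(height - overlap, 1), step):
            for x in range(0, max(width - overlap, 1), step):
                yield x, y, image[y:y + tile_size, x:x + tile_size]

    # Example with a dummy high-resolution image; in practice, run the
    # detector on each `tile` and add (x_off, y_off) to every predicted box.
    image = np.zeros((2048, 2048, 3), dtype=np.uint8)
    for x_off, y_off, tile in make_tiles(image):
        pass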

Convenient access to annotations and predictions per label

Filter annotations and predictions by label in the annotator screen to easily evaluate results. Access this functionality through the filter option.

System Requirements for Intel® Geti™ 1.5.0

The platform can be installed on a machine with the following minimum hardware requirements:

  • CPU for workstations: Intel® Core™ i7, Intel® Core™ i9, or Intel® Xeon® Scalable processor family capable of running 20 concurrent threads (when using the default K3s) or 48 concurrent threads (when using a pre-installed K8s).

Note

From the Intel® Core™ family, we recommend the following CPU series:

  • 13th gen (Raptor Lake): Intel® Core™ i7 13700 series and Intel® Core™ i9 13900 series

  • 12th gen (Alder Lake): Intel® Core™ i9 12900 series

  • CPU for cloud deployments: CPUs capable of running min. 24 concurrent threads for K3s or min. 48 concurrent threads for K8s (for example, on AWS EC2 instances this translates to min. 24 vCPUs for K3s or min. 48 vCPUs for K8s).

  • GPU: min. one NVIDIA GPU with min. 16 GB of memory (e.g., RTX 3090, RTX 6000, RTX 8000, Tesla A100, Tesla V100, Tesla P100, or Tesla T4). Other NVIDIA GPUs in similar series are likely also compatible if they meet the minimum memory requirement; however, the full range of devices is not fully tested or specifically supported. We recommend 24 GB of memory for stable training and optimization.

  • Memory: min. 64 GB RAM (128 GB recommended)

  • Disk Space: min. 500 GB (1 TB recommended) for the root partition

Installation of the platform on a multinode Kubernetes cluster configuration is not supported.

Known Issues

Component: Platform
Summary: The reset user password script has not been working since the 1.2 release.
Workaround: This issue has been fixed with the 1.5 release. If a user forgets their password on an instance that does not have an SMTP (mail) server configured, run the platform_update_password.sh script from the installation package to generate a new password for the user.

Component: Platform
Summary: Inference servers appear to be stuck.
Workaround: Occasionally, training jobs may remain stuck for several minutes, with a status requesting the initialization of an inference server, before completing. The issue is more likely to occur if the project contains partially unannotated videos.

Component: Platform
Summary: Inference on GPU fails (deployment.load_inference_models(device='GPU')).
Workaround: This issue appears to be connected to a problem with GPU inference in OpenVINO 2022.3. We recommend installing OpenVINO 2023.0 to address it. Note that it should be installed after setting up the Geti SDK, as both the SDK and OTX still require OpenVINO 2022.3. Installing OpenVINO 2023.0 afterwards upgrades OpenVINO while maintaining the functionality of the Geti SDK and OTX, which effectively resolves the GPU inference problem.
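
Here is a minimal sketch of the fix described above, assuming pip-based installs; the version pin is illustrative:

    # Install order matters: the Geti SDK pins OpenVINO 2022.3,
    # so upgrade OpenVINO separately afterwards:
    #   pip install geti-sdk
    #   pip install "openvino==2023.0.*"

    # Quick check that the GPU device is now visible to OpenVINO:
    from openvino.runtime import Core

    print(Core().available_devices)  # expect a list such as ['CPU', 'GPU']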