Intel® Geti™ 2.6.0#
Release Summary#
The Intel® Geti™ 2.6.0 release is now available; the updates and fixes include the following:
Visual Prompting (beta) - users can now explore how visual prompting can drastically accelerate data annotation. In this release, the visual prompting workflow is available as a beta feature for detection and segmentation projects to speed up data annotation and model training, leveraging the Segment Anything Model (SAM), a large vision model (LVM).
New Model Architectures – the Intel® Geti™ model suite has been expanded with 9 new algorithms.
First-Use Experience Notifications – new in-app guidance, tips, and hints that help new users navigate Intel® Geti™ software when they access specific screens or features for the first time.
Preserve Video Files During Dataset Export/Import – video files are now kept as-is when users export and import data in Datumaro format.
New Dataset Import/Export Progress Bar – helps users to estimate the data import and export time with ease.
Video Processing Improvement For Training And Inference – accelerating the processing of video files for model training and inference.
Optional Remixing Of Train/Validation Subsets - allows advanced users to remix the train and validation subsets between training rounds to optimize their model performance.
Anomaly Task UX Optimization – simplification of the anomaly task user experience by optimizing the model preparation flow to a single anomaly detection task.
Expanded Cross-Project Dataset Mapping – facilitating detection dataset import to a classification task, and oriented detection dataset import to a detection task.
URL Structure Update – Geti’s URL has been modified to include the user’s organization ID, providing a clearer and more intuitive URL hierarchy.
Support For Additional Video and Image Formats – added support for .m4v videos and .webp images.
Added New Filtering And Sorting Options – media sorting and filtering is now also possible by media file size.
Camera And Media Upload Performance Improvements – the process of capturing and uploading new data directly via the camera is further accelerated.
Video Range Selection for Classification and Anomaly Projects - users can now select and label a frame range in videos as part of classification and anomaly detection projects to further reduce annotation time.
Dynamic Auto-Training Annotation Threshold - Intel® Geti™ now dynamically defines the number of required annotations based on the model performance and current training dataset size to improve the auto-training UX.
Extension of Media Sorting Functionality – users can sort their media based on annotation date.
Extension of Jobs Sorting Functionality – users can sort their jobs based on creation time.
Refinement Of Model Optimization Process - improved visibility and accessibility of post-training optimization.
Model License Information – the license type of each model variant is now visible on the model selection page to help users decide which model meets their license requirements.
Added Project Export Controls – to provide users with a unified export experience in the application, project export controls have been added to the Intel® Geti™ interface.
Improved Zoom Functionality – several usability improvements have been made to the zoom functionality in the Annotator screen.
Improved Model Life Cycle Management – a phased approach is implemented to deprecate old model templates in favor of better performing algorithms, so that users can still access, review, and deploy their models.
Release Details#
This section covers additional details on the new Intel® Geti™ functionality.
Visual Prompting#
Visual prompting allows users to prompt a working model with just a single annotation.
On the project page of segmentation-type and detection-type projects, users can select the inference model of choice by opening the AI learning configuration panel: the default active model, or the visual prompting model (‘LVM: SAM’).
The visual prompting model will return predictions as soon as at least one image in the training dataset has been annotated. This can help users to annotate their data faster, even before the first version of their active model is trained.
After the first prompted image has been submitted, the visual prompting model will be available in the Models screen, just like any other model in Intel® Geti™. Here, users will be able to download the SAM model and see the labels and reference features it has learned. As with other models, users can run tests to compare the performance of the visual prompting model with other models that they have trained.
On the Annotator page, users can also compare the model predictions of the SAM model with the active model in the AI Prediction mode.
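The workflow above is entirely UI-driven, but the underlying idea of prompting SAM with a single annotation can be illustrated with the open-source segment-anything package. The snippet below is a minimal sketch under that assumption; it is not the pipeline Intel® Geti™ runs internally, and the checkpoint name, image path, and click coordinate are placeholders.

```python
# Illustrative sketch: prompting SAM with a single point annotation using the
# open-source "segment-anything" package. Not the Intel Geti internal pipeline.
import cv2
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")  # downloaded checkpoint
predictor = SamPredictor(sam)

image = cv2.cvtColor(cv2.imread("example.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# A single foreground click acts as the visual prompt.
point = np.array([[320, 240]])   # hypothetical (x, y) pixel coordinate
label = np.array([1])            # 1 = foreground
masks, scores, _ = predictor.predict(point_coords=point, point_labels=label)
print(f"Best mask score: {scores.max():.2f}")
```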
New Model Architectures#
With this release, nine new model architectures have been added to the Intel® Geti™ model suite, as listed below:
- Classification
  - EfficientNet_V2_L
  - EfficientNet_B3
  - MobileNet_V3_small
- Detection
  - RTMDet_tiny
  - RTDetr_101
  - RTDetr_50
  - RTDetr_18
- Semantic segmentation
  - DINOV2_S
- Instance segmentation
  - RTMDet_tiny
First-Use Experience Notifications#
First-use guidance is available to help new users navigate the software and discover its capabilities.
Various notifications will guide users through the model preparation workflow and highlight features that will help them to successfully train their first model.
Each hint contains a link to additional information in the User Guide.
You can reset the first-use experience via the ‘Reset help dialogs’ option under the Help button at the top right.
For example, once a training job finishes, you can compare annotation versus prediction quality in the dedicated AI prediction mode.
Preserve Video Files During Dataset Export/Import in Datumaro Format#
From this version onwards, video files can be preserved during Datumaro dataset export/import. When exporting a dataset in Datumaro format, you can now choose whether to export only the annotated frames as images, or to keep the entire video files.
New Dataset Import/Export Progress Bar#
When importing or exporting datasets, you can now view the progress in the UI and estimate the preparation time accordingly. This time estimate is especially useful when preparing large datasets.
Optional Remixing of Train/Validation Subsets#
Advanced users can now enable remixing of the train and validation subsets between every model training round. Remixing those subsets can improve the test performance and active learning efficiency. This option has been added as a reconfigurable parameter for the active model.
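Conceptually, remixing means the train/validation partition is redrawn before each training round instead of being kept fixed. The sketch below only illustrates that idea with a per-round random seed; it is not Geti’s implementation, and the helper name and dataset are made up.

```python
# Conceptual sketch of remixing train/validation subsets between rounds.
# Not Geti code: it only shows the split being redrawn with a fresh seed
# before each training round instead of staying fixed.
from sklearn.model_selection import train_test_split

def split_for_round(items, round_index, val_fraction=0.2):
    # A different seed per round produces a different train/validation
    # partition; a constant seed would keep the subsets identical.
    return train_test_split(items, test_size=val_fraction, random_state=round_index)

annotated_item_ids = list(range(100))  # placeholder dataset item IDs
for training_round in range(3):
    train_ids, val_ids = split_for_round(annotated_item_ids, training_round)
    print(training_round, len(train_ids), len(val_ids), val_ids[:3])
```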
Anomaly Task UX Optimization#
To simplify the anomaly task flow and optimize the performance for users, the anomaly task suite now consists of a single anomaly task type, Anomaly Detection, which provides image-level labels. No local annotations are required anymore to train an anomaly algorithm. All existing anomaly-type projects are automatically converted to Anomaly Detection.
Expanded Cross-Project Dataset Mapping#
To further build out the cross-project mapping capabilities in the Intel® Geti™ software, additional mapping capabilities have been added. They allow dataset export/import operations from detection to classification projects and from rotated detection to detection projects.
URL Structure Update#
All in-app URLs have been changed to include the Geti organization ID, for example:
Previously
<ip_address>/workspaces/<workspace_id>
Current
<ip_address>/organizations/<organization_id>/workspaces/<workspace_id>
This change makes it easy to see the identifier of the organization you are currently logged into, and to extract the organization ID when using Intel® Geti™ via the REST API.
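As a small illustration (a hypothetical helper, not part of the product or its SDK), the organization and workspace identifiers can be pulled out of the new URL structure with a regular expression before constructing REST API calls:

```python
# Hypothetical helper: extract organization and workspace IDs from the new
# Intel Geti URL structure. Not part of the Geti SDK or REST API itself.
import re

def parse_geti_url(url: str) -> dict:
    pattern = r"/organizations/(?P<organization_id>[^/]+)/workspaces/(?P<workspace_id>[^/]+)"
    match = re.search(pattern, url)
    if match is None:
        raise ValueError("URL does not follow the /organizations/.../workspaces/... structure")
    return match.groupdict()

ids = parse_geti_url("https://203.0.113.10/organizations/my-org-id/workspaces/my-workspace-id")
print(ids["organization_id"], ids["workspace_id"])
```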
Video Range Selection for Classification & Anomaly Projects#
To significantly reduce the time and effort required for video annotation, users can now select and label a frame range of videos as part of classification and anomaly detection projects. This functionality is available via the Datasets screen by clicking the three dots on a video, and then clicking ‘Select frames for training’.
Dynamic Auto-Training Annotation Threshold#
Previously, in auto-training mode, Intel® Geti™ required a fixed number of newly annotated images (12) before auto-training started, so that a model was improved in an iterative way. As the annotated dataset grows, the auto-training threshold should grow as well, so that the number of newly annotated images stays in proportion with the dataset used during the previous training round. The adaptive auto-training threshold is therefore based on the model performance and the current training dataset size. This prevents the initiation of redundant training jobs and reduces inefficient use of computational resources.
To enable the adaptive auto-training threshold, users can toggle the “Dynamic Required Annotations” option in the AI learning configuration panel when auto-training is enabled.
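The exact rule Intel® Geti™ applies is not spelled out here; purely to illustrate the concept, an adaptive threshold could grow with the size of the previous training dataset while never dropping below the old fixed minimum, as in this hypothetical sketch:

```python
# Purely illustrative: one possible shape of an adaptive auto-training
# threshold. This is NOT the rule Intel Geti uses; it only shows a threshold
# that scales with dataset size, modulated by the current model score.
def required_new_annotations(dataset_size: int, model_score: float,
                             base: int = 12, fraction: float = 0.1) -> int:
    proportional = int(fraction * dataset_size)            # scale with the previous round's dataset
    adjusted = int(proportional * max(model_score, 0.5))   # demand less new data while the model is weak
    return max(base, adjusted)                             # never below the old fixed threshold

print(required_new_annotations(dataset_size=120, model_score=0.6))   # -> 12
print(required_new_annotations(dataset_size=1000, model_score=0.9))  # -> 90
```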
Refinement Of Model Optimization Process#
Previously, there were three types of model optimization available in Intel® Geti™: HPO, NNCF, and POT. To make sure that Intel® Geti™ provides the best optimization results in the easiest way possible, the model optimization process has been improved and simplified to include only POT (Post-training Optimization).
To make the Post-training Optimization option more visible to users, the optimized model architecture is already highlighted as an option in the Models screen. Users can simply click ‘Start optimization’, and the optimized model will become available. Once the optimization is finished, the button will disappear, and the results will appear in the model variants overview.
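Inside the Geti UI this is a one-click action. For readers curious about what post-training optimization looks like outside the product, the sketch below uses OpenVINO’s NNCF post-training quantization API as an assumed stand-in; it is not the exact pipeline Geti runs, and the model path, input shape, and calibration data are placeholders.

```python
# Illustration only: 8-bit post-training quantization with OpenVINO NNCF,
# as an assumed analogue of the 'Start optimization' action. Model path,
# input shape, and calibration data are placeholders.
import numpy as np
import nncf
import openvino as ov

core = ov.Core()
model = core.read_model("exported_model.xml")  # hypothetical exported IR model

# A small set of preprocessed inputs is enough to calibrate the quantizer.
calibration_items = [np.random.rand(1, 3, 224, 224).astype(np.float32) for _ in range(64)]
calibration_dataset = nncf.Dataset(calibration_items)

quantized_model = nncf.quantize(model, calibration_dataset)
ov.save_model(quantized_model, "exported_model_int8.xml")
```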
Added Project Export Controls#
Project export controls have been added to the Intel® Geti™ interface to provide a unified export experience in the application. Users can now cancel initiated project export jobs, and manually download the exported file when the project export has finished.