Intel® Geti™ 1.0.0 Beta
About the Intel® Geti™ platform
The Intel® Geti™ platform enables enterprise teams to rapidly build computer vision AI models. Through an intuitive graphical interface, users add image or video data, make annotations, train, retrain, export, and optimize AI models for deployment. Equipped with state-of-the-art technology such as active learning, task chaining, and smart annotations, the Intel® Geti™ platform reduces labor-intensive tasks, enables collaborative model development, and speeds up model creation.
System Requirements for 1.0.0-beta
The platform can be installed on a machine with the following minimum hardware requirements:
CPU for workstations: Intel® Core™ i7, Intel® Core™ i9, or Intel® Xeon® Scalable family processors capable of running 20 concurrent threads (when using the default K3s) or 48 concurrent threads (when using a pre-installed K8s).
Note
From the Intel® Core™ family, we recommend the following CPU series:
13th gen (Raptor Lake): Intel® Core™ i7 13700 series and Intel® Core™ i9 13900 series
12th gen (Alder Lake): Intel® Core™ i9 12900 series
CPU for cloud deployments: CPUs capable of running a minimum of 24 concurrent threads for K3s or 48 concurrent threads for K8s (on AWS EC2 instances, for example, this translates to a minimum of 24 vCPUs for K3s or 48 vCPUs for K8s).
GPU: at least one NVIDIA GPU with a minimum of 16 GB of memory (e.g. RTX 3090, RTX 6000, RTX 8000, Tesla A100, Tesla V100, Tesla P100, or Tesla T4). Other NVIDIA GPUs in the same series are likely compatible if they meet the minimum memory requirement; however, the full range of devices has not been fully tested and is not specifically supported. We recommend 24 GB of GPU memory for stable training and optimization.
Memory: min. 64 GB RAM (128 GB recommended)
Disk Space: min. 500 GB (1 TB recommended) for the root partition
Installation of the platform on a multinode Kubernetes cluster configuration is not supported.
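The hardware minimums above can be sanity-checked before installation with a short shell script. This is a hypothetical helper, not part of the Intel® Geti™ installer; it assumes a Linux host and checks the K3s thresholds (20 threads, 64 GB RAM, 500 GB root disk) plus GPU visibility via nvidia-smi:

```shell
#!/bin/sh
# Hypothetical pre-install check against the minimum requirements above
# (thread count, RAM, root disk space, GPU); not part of the installer.

THREADS=$(nproc)                                                   # concurrent threads the CPU can run
RAM_GB=$(awk '/MemTotal/ {printf "%d", $2 / 1024 / 1024}' /proc/meminfo)
DISK_GB=$(df -BG --output=size / | tail -1 | tr -dc '0-9')

[ "$THREADS" -ge 20 ]  || echo "WARN: need >= 20 threads for K3s (have $THREADS)"
[ "$RAM_GB" -ge 64 ]   || echo "WARN: need >= 64 GB RAM (have $RAM_GB GB)"
[ "$DISK_GB" -ge 500 ] || echo "WARN: need >= 500 GB on / (have $DISK_GB GB)"

# GPU check: requires the NVIDIA driver to be installed
if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi --query-gpu=name,memory.total --format=csv,noheader
else
    echo "WARN: nvidia-smi not found; cannot verify GPU memory (>= 16 GB required)"
fi
```

For a pre-installed K8s setup, raise the thread threshold to 48 per the requirement above.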
Known Issues
| Component | Summary | Workaround |
| --- | --- | --- |
| Platform | License check fails on the 5.14.0-1024-oem kernel | Do not use a Linux kernel newer than 5.13.0-30-generic on the platform machine. |
| Platform | Upgrade to a newer version is not reverted on failure | If the upgrade fails, rerun it to check whether the problem reappears. If it does, contact support. |
| Platform | Occasional training failure with "CUDA error: all CUDA-capable devices are busy or unavailable" | If, in rare cases, training fails, start it over. If it fails again, contact support. |
| Platform | Occasional training failure because an inference job was not created | If, in rare cases, training fails, start it over. If it fails again, contact support. |
| Platform | Occasional training failure with "CUDA error: out of memory" | If, in rare cases, training fails, start it over. If it fails again, contact support. |
| Installer | [Offline Upgrade] Offline upgrade fails with an absolute path | When following the installation manual, do not set OFFLINE_DEPENDENCIES_LOCATION; instead, copy the files to the default location, the platform/installer/offline folder. |
| NNCF | In-training optimization (NNCF quantization) fails on Lite-HRNet-x-mod3 | Use post-training optimization. |
| Model Templates | Optimization fails for the MaskRCNN-EfficientNet instance segmentation model | For post-training optimization, use hardware with more RAM. For in-training optimization, use a GPU with more memory. |
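The offline-upgrade workaround amounts to leaving OFFLINE_DEPENDENCIES_LOCATION unset and placing the dependency files in the installer's default folder. A minimal sketch (the source path for the downloaded files is hypothetical):

```shell
# Sketch of the offline-upgrade workaround: leave the
# OFFLINE_DEPENDENCIES_LOCATION variable unset and place the offline
# dependency files in the installer's default folder instead.
unset OFFLINE_DEPENDENCIES_LOCATION

# Default location expected by the installer, relative to its working directory
mkdir -p platform/installer/offline

# Copy the previously downloaded dependency files here, e.g.:
#   cp /path/to/offline-deps/* platform/installer/offline/
```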