Annotation Screen#

This section introduces the core layout of the Annotation Editor and its functions.


The Annotation Editor is divided into five main interaction sections.

  1. Header

  2. Primary Toolbar

  3. Secondary Toolbar

  4. Annotation List & Media Gallery

  5. Annotation Canvas

Below, we will explain what these sections are for.

Primary Toolbar#

In the Primary Toolbar you can find the following tools. The available tools depend on the type of project you choose.

| Icon | Action |
| --- | --- |
| select-tool-icon | Select the Selector tool to edit, hide, or delete annotations in an image. |
| bounding-box-icon | Select the Bounding Box tool to draw a rectangle around an object. |
| SSIM-icon | Select the Detection Assistant tool to draw a rectangle around an object so that the system can highlight similar-looking objects (a conceptual sketch of this idea follows the table). |
| rotated-bb-icon | Select the Rotated Bounding Box tool to draw a rotated rectangle that tightly encloses an object, for example in top-view images. |
| circle-tool-icon | Select the Circle tool to draw a circle around an object. |
| polygon-tool-icon | Select the Polygon tool to draw a free-form outline that delineates an object. |
| object-selection-icon | Select the Quick Selection tool to draw a rectangle around an object; the system then automatically draws a polygon around that object. |
| object-coloring-icon | Select the Object Coloring tool to brush over an object whose shape is then automatically delineated. |
| interactive-segmentation-icon | Select the Interactive Segmentation tool to automatically annotate the object of interest by placing a dot inside it. |
| heatmap | Select the Heatmap tool to visualize the focal area of annotations in an image. |
| undo-icon | Undo the annotation you created. |
| redo-icon | Redo the annotation you have undone. |
| fit-screen-icon | Fit the image to the screen. |
| zoom-in-icon or zoom-out-icon | Zoom in and out on the image. |
| visible-icon or invisible-icon | Hide or show annotations. |
| change-appearance-icon | Change appearance. |
| hotkeys-icon | Display the keyboard shortcuts. |
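
The icon name suggests that the Detection Assistant is based on structural similarity (SSIM). The sketch below is only a conceptual illustration of how look-alike regions could be ranked with SSIM; it is not the Intel® Geti™ implementation. It assumes scikit-image and NumPy are available, and `image`, `template_box`, and `candidate_boxes` are hypothetical inputs.

```python
# Conceptual sketch only: rank candidate regions by structural similarity (SSIM)
# to a user-drawn template crop. Grayscale images are assumed.
import numpy as np
from skimage.metrics import structural_similarity as ssim

def crop(image: np.ndarray, box: tuple) -> np.ndarray:
    x, y, w, h = box
    return image[y:y + h, x:x + w]

def similar_regions(image, template_box, candidate_boxes, threshold=0.7):
    """Return (box, score) pairs whose crop looks similar to the template crop."""
    template = crop(image, template_box)
    matches = []
    for box in candidate_boxes:
        candidate = crop(image, box)
        if candidate.shape != template.shape:
            continue  # this simple sketch compares equally sized patches only
        score = ssim(template, candidate)
        if score >= threshold:
            matches.append((box, score))
    return sorted(matches, key=lambda item: item[1], reverse=True)
```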

Secondary Toolbar#

In the Secondary Toolbar you can find the following functionality:

| Icon | Action |
| --- | --- |
| secondary-toolbar-options | Display the name of the tool and its options. |
| previous-button | Switch to the previously annotated media item. |
| next-button | Switch to the next media item to be annotated. |
| submit-button | Accept the annotated media item. |

Annotation Canvas#

The annotation canvas takes up most of the screen and contains the image or video frame to be annotated. The primary controls for the canvas include:

  • Panning the image: Mouse Middle Click + Drag or Ctrl + Mouse Left Click + Drag

  • Zooming in/out: scrolling the Mouse Wheel Up/Down

In addition, the annotation canvas is where you create, edit, and delete annotations.


You can change the canvas settings by clicking on canvas-settings-icon in the primary toolbar on the left. Clicking the icon lets you change the following options:

  • Label opacity - the opacity level for labels on the canvas

  • Annotation fill opacity - the opacity level for annotations on the canvas

  • Annotation border opacity - the opacity level for annotation borders on the canvas

  • Image brightness - the image brightness of a media item to annotate

  • Image contrast - the image contrast of a media item to annotate

  • Image saturation - the image saturation of a media item to annotate

  • Pixel view toggle - pixelated image rendering while zooming in or out, as shown in the example below

Default vs. Pixel view: zoom in on the images to see the difference.

The settings are saved per project and do not propagate to training, which means they do not influence the trained model. They are visualization aids in the user interface that help you manage the visibility of annotations.

You can also move the adjustments box on the annotation canvas.
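
To make it concrete that these adjustments are display-only, the hedged sketch below applies brightness, contrast, and saturation changes to a display copy of an image using Pillow; the file on disk (and therefore the dataset) is never modified. The factor values and the use of Pillow are assumptions made for the example, not the platform's internal rendering code.

```python
# Illustrative only: brightness/contrast/saturation applied to a display copy.
# The original image file is left untouched, so settings like these cannot
# affect model training.
from PIL import Image, ImageEnhance

def adjusted_preview(path: str, brightness=1.2, contrast=1.1, saturation=0.9):
    """Return an adjusted copy for display; a factor of 1.0 means 'unchanged'."""
    image = Image.open(path).convert("RGB")
    image = ImageEnhance.Brightness(image).enhance(brightness)
    image = ImageEnhance.Contrast(image).enhance(contrast)
    image = ImageEnhance.Color(image).enhance(saturation)  # Color == saturation
    return image
```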

Selecting labels#

You can label the image by:

  • typing the name of the label in the label field on the Annotations list on the right-hand side of the screen and pressing Enter

  • clicking on the field on the Annotations list and choosing a label from the dropdown menu

  • clicking on Select label on the top frame of the image in a Classification project

  • clicking on the recently used labels in the top right-hand side of the annotation screen

  • clicking on More in the top right-hand side of the annotation screen

  • leveraging shortcuts set during project creation, e.g. CTRL+1

  • typing the name of the label in the Select label search box in the top left-hand corner of the screen, or choosing a label from its dropdown menu, in a Detection project

Predictions vs Annotations#

The Intel® Geti™ platform offers a set of tools and features to automate the annotation process. One of these features is continuous model creation while you annotate the images in your dataset. Once you have annotated a given number of images yourself, the application starts annotating images for you and prompts you to accept, reject, or edit each annotation.

Prediction mode switches on automatically once the first round of training finishes; however, you can switch between annotation and prediction modes at any moment, provided that the training job has finished.

Note

The difference between annotation and prediction is that with the first you do the work, and with the second you supervise the work. Annotation is manual and performed by a user, while prediction is automatic and performed by the system.

Annotation vs. Prediction (example images).

Once the first round of training has finished, the Submit button changes to the Accept button, which confirms all predictions. If a prediction is not correct, you can click Edit to rectify it in the Annotation panel or edit the label name. You will see a list of annotations that you can hide, remove, or edit.

Note

The placement of the Edit button depends on the type of project you created. In a classification project, it is next to the Accept button. In detection and segmentation projects, the Edit button is placed in the Annotation panel.

You can filter predictions in the Annotation panel based on the prediction score. Click on the Filter by score dropdown menu and move the slider: move it to the left to lower the score threshold or to the right to raise it.
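
Conceptually, the score filter is a confidence threshold applied to the list of predictions. The snippet below is a hypothetical illustration of that logic; the `Prediction` structure and field names are invented for the example and do not come from the Intel® Geti™ API.

```python
# Hypothetical illustration of filtering predictions by confidence score.
from dataclasses import dataclass

@dataclass
class Prediction:
    label: str
    score: float  # confidence between 0.0 and 1.0

def filter_by_score(predictions, min_score):
    """Keep only predictions at or above the chosen score threshold."""
    return [p for p in predictions if p.score >= min_score]

predictions = [Prediction("cat", 0.92), Prediction("dog", 0.41), Prediction("cat", 0.67)]
print(filter_by_score(predictions, min_score=0.6))  # keeps the 0.92 and 0.67 predictions
```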

Saliency map#

No map vs. With map (example images).

The saliency map feature becomes active after the completion of the first round of training. A saliency map highlights the regions or pixels in an image that the neural network found relevant for the computer vision task.

You can select a map from the dropdown menu, which changes the map per label. If an image has more than one label, e.g. cat and dog, you will have two maps: one for the cat label and one for the dog label. Selecting the first map (cat label) shows the pixels that contributed to that prediction; selecting the dog label map shows the pixels that contributed to the dog prediction. You can also use the maps with only one label per image.

Important

These maps are available only for the classification and segmentation projects. The detection project does not support this feature since detection algorithms do not work per pixel.

You can also change the opacity of the map applied to the image by moving the slider.
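
To illustrate what changing the opacity of the map amounts to, here is a minimal sketch that blends a per-label saliency map over the original image using OpenCV. The value range of the map, the colormap, and the blending approach are assumptions made for the example, not a description of the platform's rendering.

```python
# Illustrative sketch: blend a per-label saliency map over an image.
import cv2
import numpy as np

def overlay_saliency(image_bgr, saliency, opacity=0.5):
    """Blend a saliency map (values in [0, 1], same height/width as the image)."""
    heat_u8 = np.uint8(np.clip(saliency, 0.0, 1.0) * 255)
    heat_color = cv2.applyColorMap(heat_u8, cv2.COLORMAP_JET)  # colorize the map
    return cv2.addWeighted(heat_color, opacity, image_bgr, 1.0 - opacity, 0)
```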

Let’s consider some real-life examples.

If you are classifying ships, the saliency map may reveal how significant water is for recognizing them. If the map shows that water is very important for the ship classifier, you might want to provide a more diverse dataset by adding images of ships outside the water, since the goal is to classify the ship, not the water.

These maps are useful for spotting correlated data. Similar to the ship example, suppose you want to classify whether a person is in a room. If the light is turned on every time a person enters the room, the model might end up learning to recognize whether the lights are on or off, as this is highly correlated with person/no person. The saliency map would then show activations on the light bulb rather than the person.