# Annotation Screen
This section introduces the core layout of the Annotation Editor and its functions. The Annotation Editor is divided into five main interaction sections, which are explained below.
## Header
In the Header you can find the following functionality:
| Icon | Action |
|---|---|
|  | Go back to the project screen. |
|  | Select a step in the task chain. See Task Chain Details below. |
|  | View annotations made by the user or by the system. |
|  | Enable or disable training that is triggered after annotating a required quota of objects. |
|  | See the number of annotations required for training to commence. |
|  | Accuracy progress bar. See Accuracy Progress Bar Details below. |
|  | Check the status of running, finished, scheduled, and failed jobs. Search through jobs and filter and sort them by who created the job, start time, or scheduled start time. The availability of job scheduler functionality depends on user access levels. |
### Task Chain Details
In a Detection > Classification task chain, you can switch between the two tasks (detection and classification) with buttons at the top of the annotation screen; likewise, in a Detection > Segmentation task chain you can switch between detection and segmentation. The first and default option, ALL TASKS, lets you create annotations for both tasks. You can also decide to work on only one part of the task chain: choose DETECTION to create annotations for detection only, or CLASSIFICATION/SEGMENTATION, depending on the chain type, to classify or delineate the objects.
> **Hint:** Switching between tasks in a chain can come in handy when two or more team members are responsible for different tasks in the chain. Each can focus on one part of the task without interfering with the others.
### Accuracy Progress Bar Details
The accuracy progress bar shows the accuracy metric of the most recently trained model.
## Primary Toolbar
In the Primary Toolbar you can find the following functionality. The availability of some tools depends on the project you choose:
| Icon | Action |
|---|---|
|  | Select the Selector tool to edit, hide, or delete annotations in an image. |
|  | Select the Bounding Box tool to draw a rectangle around an object. |
|  | Select the Detection Assistant tool to draw a rectangle around an object so that the system can highlight similar-looking objects. |
|  | Select the Rotated Bounding Box tool to tightly encompass an object seen from the top view. |
|  | Select the Circle tool to draw a circle around an object. |
|  | Select the Polygon tool to draw a free form around an object, delineating its outline. |
|  | Select the Quick Selection tool to draw a rectangle around an object; the system then automatically draws a free-form polygon around that object. |
|  | Select the Object Coloring tool to stroke a brush over the objects whose shape will then be automatically delineated. |
|  | Select the Interactive Segmentation tool to automatically annotate an object of interest by placing a dot inside the object. |
|  | Select the Heatmap tool to visualize the focal area of annotations in an image. |
|  | Undo the annotation you created. |
|  | Redo: return to the state before the undo. |
|  | Fit the image to the screen. |
|  | Zoom in and out on the image. |
|  | Hide or show annotations. |
|  | Change the canvas appearance settings. |
|  | Display the keyboard shortcuts. |
## Secondary Toolbar
In the Secondary Toolbar you can find the following functionality:
| Icon | Action |
|---|---|
|  | Display the name of the tool and its options. |
|  | Switch to the previously annotated media item. |
|  | Switch to the next media item to be annotated. |
|  | Accept the annotated media item. |
## Annotation List & Media Gallery
The right-hand side panel of the screen displays the annotations you created in an image or video frame, along with a media gallery in two modes: Active set (the default) or Data set. You can hide the panel and resize the annotation list and the media gallery.
### Annotation List
The annotation list is a collection of all the annotations you created in a media item (image or video frame). You will see the list upon creating your first annotation. The more annotations you create, the more items will appear in the list.
In the annotation list, you can find the following functionality:
| Icon | Action |
|---|---|
|  | Select a label for the annotation. |
|  | Select all annotations. |
|  | Hide annotations. |
|  | Remove annotations. |
|  | Edit the labels of an annotation; select several annotations to edit them simultaneously. |
|  | Lock an annotation to disable operations on it, e.g. to prevent it from being removed during a batch removal. |
### Media Gallery
In the media gallery, you can switch between Active set and Data set.
Active set is the system default and displays media items in the order best suited for creating a well-balanced and fully functional model. However, you can switch to Data set to display the media items in the order they were arranged in your dataset (the folder with your media items).
If you selected the Active set, you can refresh it at any time.
> **Hint:** It is advisable to refresh the Active set after training a new model, as a new, more suitable order in which the media items should be annotated may be available.
To select a specific media item in the gallery, click on the desired image or frame thumbnail. The image or frame, together with its annotations, will then appear on the canvas. To view subsequent media items, scroll with your mouse over the gallery or use the scroll bar located at the right of the gallery.
## Annotation Canvas
The annotation canvas takes up the majority of the screen real estate and contains the image or video frame to be annotated. The primary controls for the canvas are:
- Panning the image: Mouse Middle Click + Drag, or Ctrl + Mouse Left Click + Drag
- Zooming in/out: scroll the Mouse Wheel Up/Down
In addition, the annotation canvas is also the place for you to create, edit and delete annotations.
You can change the canvas settings by clicking the adjustments icon in the primary toolbar on the left. Upon clicking the icon, you will be able to change the following options:
- Label opacity - the opacity level for labels on the canvas
- Annotation fill opacity - the opacity level for annotations on the canvas
- Annotation border opacity - the opacity level for annotation borders on the canvas
- Image brightness - the image brightness of a media item to annotate
- Image contrast - the image contrast of a media item to annotate
- Image saturation - the image saturation of a media item to annotate
- Pixel view toggle - pixelated image rendering while zooming in or out

(Comparison images: Default vs. Pixel view. Zoom in on the images to see the difference.)
The settings are saved per project and do not propagate to the training, which means they do not influence model training. They are visualizations in the user interface for you to manage the visibility of annotations.
You can also move the adjustments box on the annotation canvas.
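The image adjustments above (brightness, contrast, and the opacity sliders) are standard per-pixel transforms applied only in the viewer. As a rough illustration of what such controls do, here is a plain-Python sketch; this is not the platform's rendering code, just the conventional formulas:

```python
def adjust_brightness(pixels, factor):
    """Scale each channel value; factor 1.0 leaves the image unchanged."""
    return [[tuple(min(255, int(c * factor)) for c in px) for px in row]
            for row in pixels]

def adjust_contrast(pixels, factor):
    """Push channel values away from (factor > 1) or toward (factor < 1) mid-gray."""
    return [[tuple(max(0, min(255, int((c - 128) * factor + 128))) for c in px)
             for row_px in [px] for px in row_px] if False else
            [tuple(max(0, min(255, int((c - 128) * factor + 128))) for c in px)
             for px in row] for row in pixels]

# A tiny 1x2 RGB "image"
image = [[(100, 150, 200), (50, 50, 50)]]
brighter = adjust_brightness(image, 1.2)  # channels scaled up by 20%
flatter = adjust_contrast(image, 0.5)     # channels pulled halfway toward 128
```

Since these adjustments live only in the viewer, the pixel data used for training is untouched, which matches the note above that the settings do not influence model training.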
## Selecting labels
You can label the image by:

- typing the name of the label in the label field on the Annotations list on the right-hand side of the screen and pressing Enter
- clicking on the field on the Annotations list and choosing a label from the dropdown menu
- clicking on Select label at the top frame line of the image in a Classification project
- clicking on the recently used labels in the top right-hand side of the annotation screen
- clicking on More in the top right-hand side of the annotation screen
- using keyboard shortcuts set during project creation, e.g. CTRL+1
- typing the name of the label in the Select label search box in the top left-hand corner of the screen, or choosing a label from its dropdown menu, in a Detection project
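Label search boxes like the ones above typically filter the available labels as you type. A minimal sketch of that behavior, assuming case-insensitive substring matching (`match_labels` is a hypothetical helper for illustration, not part of the platform):

```python
def match_labels(labels, query):
    """Return the labels containing the query, ignoring case - a hypothetical
    stand-in for how a label search box narrows its dropdown as you type."""
    q = query.lower()
    return [label for label in labels if q in label.lower()]

labels = ["Cat", "Dog", "Caterpillar"]
print(match_labels(labels, "cat"))  # ['Cat', 'Caterpillar']
```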
## Predictions vs Annotations
The Intel® Geti™ platform offers a set of tools and features to automate the annotation process. One of these features is continuous model creation while you annotate the images in your dataset. After you have annotated a given number of images yourself, the application will start annotating the images for you and prompt you to accept, reject, or edit each annotation.
Prediction mode switches on automatically once the first round of training finishes; however, you can switch between the two modes at any moment, provided that the training job has finished.
> **Note:** The difference between an annotation and a prediction is that for the former you do the work, while for the latter you supervise it. Annotation is manual and performed by a user, while prediction is automatic and performed by the system.
(Comparison images: Annotation vs. Prediction.)
Once the first round of training has finished, the Submit button changes to the Accept button, which confirms all predictions. If a prediction is not correct, you can click Edit to rectify it in the Annotation panel or edit the label name. You will see a list of annotations that you can hide, remove, or edit.
> **Note:** The placement of the Edit button depends on the project type. In a classification project, it is next to the Accept button. In detection and segmentation projects, the Edit button is placed in the Annotation panel.
You can filter predictions in the Annotation panel based on the prediction score. Click the Filter by score dropdown menu and move the slider: to the left to lower the score threshold, or to the right to increase it.
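Conceptually, filtering by score keeps only the predictions whose confidence meets the chosen threshold. A minimal sketch of that idea (the `score` and `label` field names are assumptions for illustration, not the platform's data model):

```python
def filter_by_score(predictions, threshold):
    """Keep only the predictions whose confidence score meets the threshold,
    mirroring what moving the score slider does in the Annotation panel."""
    return [p for p in predictions if p["score"] >= threshold]

predictions = [
    {"label": "cat", "score": 0.92},
    {"label": "dog", "score": 0.41},
    {"label": "cat", "score": 0.67},
]
print(filter_by_score(predictions, 0.5))  # keeps the 0.92 and 0.67 predictions
```

Moving the slider to the right corresponds to raising `threshold`, so fewer, higher-confidence predictions remain visible.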
## Saliency map
(Comparison images: without and with the saliency map.)
The saliency map feature becomes active after the completion of the first round of training. The saliency map highlights the regions/pixels in an image that the neural network found relevant for a computer vision task.
You can select a map from the drop-down menu, which changes the map per label. If you have more than one label in the image, e.g. cat and dog, you will have two maps: one for the cat label and one for the dog label. When you select the first map (cat label), you will see the pixels that contributed to that prediction; choose the dog label map to see the pixels that contributed to the dog prediction. You can also use the maps with only one label per image.
> **Important:** These maps are available only for classification and segmentation projects. Detection projects do not support this feature, since detection algorithms do not operate per pixel.
You can also change the opacity of the map applied to the image by moving the slider.
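Overlaying a map at a given opacity is a standard alpha blend of the saliency color over the image pixel. A minimal per-pixel sketch of that blend (illustrative only, not the platform's rendering code):

```python
def blend(image_px, map_px, opacity):
    """Alpha-blend a saliency color over an image pixel; opacity is in [0, 1],
    where 0 shows only the image and 1 shows only the map."""
    return tuple(int(i * (1 - opacity) + m * opacity)
                 for i, m in zip(image_px, map_px))

pixel = (100, 100, 100)   # gray image pixel
saliency = (255, 0, 0)    # red = highly relevant region in the map
print(blend(pixel, saliency, 0.5))  # (177, 50, 50)
```

Moving the opacity slider corresponds to changing `opacity`: lower values let more of the original image show through the map.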
Let’s consider some real-life examples.
If you are classifying ships, the saliency map may reveal how significant water is in recognizing ships. If the map shows that water is very important to the ship classifier, you might want to provide a more diverse dataset by adding images of ships outside the water, since the goal is to classify the ship, not the water.
These maps are also useful for spotting correlated data. Similar to the ship example, suppose you want to classify whether a person is in a room. It could be that the light is turned on every time a person enters the room; the model might then simply learn to recognize when the lights are on or off, as this is highly correlated with person/no person. The saliency map would show activations on the light bulb rather than the person.