Label Operations
Label editing is a cornerstone of project management, enabling users to keep a project’s classification accurate and relevant as it evolves. This capability spans operations from the simple to the complex, ensuring that labels continue to reflect the project’s needs.
At its most fundamental level, label editing involves operations such as:
Renaming Labels: Adjusting the name of a label to better reflect its meaning or to correct mistakes.
Changing Colors: Modifying the color associated with a label, which is crucial for visual distinction in project interfaces.
Reassigning Hotkeys: Updating the keyboard shortcuts assigned to labels for more efficient project annotation workflows.
These operations are straightforward, as they pertain solely to the label’s properties, without affecting its relationships within the project.
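To illustrate how narrow these property edits are, the sketch below models a label as a plain Python data structure and changes only its name, color, and hotkey. The class and function names here are illustrative assumptions, not the Intel® Geti™ SDK or REST API.

```python
from dataclasses import dataclass


@dataclass
class Label:
    """Minimal stand-in for a project label (illustrative only)."""
    name: str
    color: str   # hex color used for visual distinction in the interface
    hotkey: str  # keyboard shortcut used during annotation
    group: str   # label group; untouched by the basic edits below


def rename_label(label: Label, new_name: str) -> None:
    # Only the display name changes; group membership and hierarchy are untouched.
    label.name = new_name


def change_color(label: Label, new_color: str) -> None:
    label.color = new_color


def reassign_hotkey(label: Label, new_hotkey: str) -> None:
    label.hotkey = new_hotkey


dog = Label(name="dog", color="#00A3FF", hotkey="d", group="animals")
rename_label(dog, "canine")
change_color(dog, "#FF7D00")
reassign_hotkey(dog, "c")
```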
Beyond these basic edits, label editing can involve more complex management tasks that influence the label’s interaction with other project elements.
Label addition
Imagine you’re working on a detection project with a straightforward objective: identifying dogs within images. However, as the project progresses, you recognize the value of detecting cats. This decision to include a “cat” label, while seemingly simple, necessitates careful consideration to maintain the integrity of your dataset and the accuracy of your deep learning model.
Adding a new label to an existing project isn’t just a matter of updating project settings; it’s a critical step that can affect the quality of your training data. Before adding the “cat” label, your dataset was annotated under the assumption that only dogs were of interest. Without revisiting these annotations, you risk inadvertently misleading your algorithm, which may learn to ignore cats that are present in previously annotated images.
The Intel® Geti™ platform addresses this challenge by offering the option to mark previously annotated data for review, using the to-revisit flag. This feature allows users to:
Ensure comprehensive annotation: By flagging existing data for review, you can verify whether newly relevant objects (e.g., cats) are present in images previously annotated under old criteria.
Utilize media filtering: This tool facilitates the efficient review of flagged images, ensuring that your dataset accurately reflects the project’s expanded scope.
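As a rough illustration of this flow, the sketch below adds a “cat” label, flags previously annotated media for a second pass, and then filters out the flagged items. The data structures and the annotation_state / "to_revisit" field values are assumptions made for illustration; they are not the platform’s actual data model or API.

```python
from typing import Dict, List

# A few media items as plain dicts; "annotation_state" is an illustrative field.
media: List[Dict] = [
    {"name": "img_001.jpg", "annotated": True,  "annotation_state": "annotated"},
    {"name": "img_002.jpg", "annotated": False, "annotation_state": "none"},
    {"name": "img_003.jpg", "annotated": True,  "annotation_state": "annotated"},
]

project_labels = ["dog"]


def add_label_and_flag_revisit(labels: List[str], new_label: str,
                               media_items: List[Dict]) -> None:
    """Add a label and flag already-annotated media for review."""
    labels.append(new_label)
    for item in media_items:
        # Images annotated under the old label set may contain unlabeled cats,
        # so they are marked for a second pass.
        if item["annotated"]:
            item["annotation_state"] = "to_revisit"


add_label_and_flag_revisit(project_labels, "cat", media)

# "Media filtering" step: collect only the items that need review.
to_review = [m["name"] for m in media if m["annotation_state"] == "to_revisit"]
print(to_review)  # ['img_001.jpg', 'img_003.jpg']
```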
For more complex projects, the process of label addition can involve several strategic decisions, including:
Adding Labels Within an Existing Group: This option is suited for projects where the new label shares a contextual relationship with existing labels, maintaining a coherent classification structure.
Creating New Label Groups: For labels that introduce a new category or dimension to the project, establishing a separate label group helps organize the dataset and classification logic.
Establishing Downward Label Hierarchies: When adding labels that necessitate a hierarchical organization (e.g., breeds within the “dog” label), defining new levels in the label hierarchy ensures that the model can learn and distinguish with greater specificity.
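The sketch below spells out these three options as plain Python label definitions. The group and parent fields are illustrative assumptions; the platform’s actual project-edit payload may differ.

```python
# Existing structure: a single "animals" group containing "dog".
existing_labels = [
    {"name": "dog", "group": "animals", "parent": None},
]

# 1. Adding a label within an existing group: "cat" joins "animals",
#    keeping the two labels in one coherent classification structure.
same_group_label = {"name": "cat", "group": "animals", "parent": None}

# 2. Creating a new label group: "indoor"/"outdoor" describe a different
#    dimension of the image, so they get their own group.
new_group_labels = [
    {"name": "indoor",  "group": "scene", "parent": None},
    {"name": "outdoor", "group": "scene", "parent": None},
]

# 3. Establishing a downward hierarchy: breeds become children of "dog",
#    letting the model learn finer-grained distinctions.
child_labels = [
    {"name": "labrador", "group": "dog breeds", "parent": "dog"},
    {"name": "beagle",   "group": "dog breeds", "parent": "dog"},
]

project_labels = existing_labels + [same_group_label] + new_group_labels + child_labels
```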
Label deletion
Label deletion is a fundamental aspect of project management, allowing for the removal of labels that are no longer relevant or necessary. Unlike other editing actions that might require intricate data handling to preserve dataset integrity, label deletion is generally straightforward, aligning more closely with the simplicity of label renaming in terms of impact on data consistency and correctness.
From the standpoint of project contributors and managers, removing a label is designed to be as user-friendly and non-disruptive as possible. This ensures that ongoing and future project work is not adversely affected by changes in the classification schema.
Despite the straightforward nature of label deletion, certain rules ensure the structural integrity of the project’s classification framework. These constraints are tailored to the type of project being conducted:
Classification Projects: There must always be at least two labels present in the project. This requirement is based on the foundational principle of classification tasks, which necessitate a choice between two or more categories. Removing labels until fewer than two remain would undermine the project’s ability to classify data meaningfully.
Non-Classification Projects: At least one label must be present. This rule accommodates a wider variety of project types, including detection and segmentation tasks, where a single label can still provide valuable information about the presence or absence of specific features or objects within the data.
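Because these constraints are simple counting rules, they can be expressed as a small validation check. The following sketch is a generic illustration of the rules described above, not platform code.

```python
from typing import List


def can_delete_label(project_type: str, labels: List[str], label_to_delete: str) -> bool:
    """Return True if deleting the label keeps the project structurally valid.

    Classification projects must keep at least two labels; other project types
    (detection, segmentation, ...) must keep at least one.
    """
    if label_to_delete not in labels:
        return False
    remaining = len(labels) - 1
    minimum = 2 if project_type == "classification" else 1
    return remaining >= minimum


print(can_delete_label("classification", ["dog", "cat"], "cat"))  # False: only one label would remain
print(can_delete_label("detection", ["dog", "cat"], "cat"))       # True: one label is enough
```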
Unsupported operations
The platform supports a wide range of label editing functionalities to streamline project management and improve model training outcomes. However, certain operations remain outside the current scope of support. Understanding these limitations is crucial for planning project workflows and implementing effective workarounds.
Current Unsupported Actions:
Editing Label Group Name: Directly renaming an existing label group is not supported. This restriction may require more careful initial planning of label group names to avoid the need for renaming.
Moving Labels Between Groups: There is no functionality to transfer a label from one group to another directly. The current workaround is to delete the label from its original group and create a new label within the desired group (a conceptual sketch of this follows the list). This method results in the loss of annotations associated with the deleted label and may affect the deep learning model’s learned patterns related to that label.
Label Addition/Deletion in Anomaly-Type Projects: Anomaly detection projects do not support adding or deleting labels. Given the nature of anomaly detection, which often focuses on identifying deviations from a norm rather than classifying distinct categories, the typical label management operations do not apply.
Adding Label Parents in Hierarchical Projects: The platform currently only supports expanding the label hierarchy downwards by adding child labels. This constraint means you cannot introduce new parent labels into an existing hierarchy, potentially affecting the flexibility to restructure classification schemes as projects evolve.
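Since labels cannot be moved between groups directly, the workaround mentioned above amounts to a delete-and-recreate step. The sketch below illustrates the idea with plain Python data; it is not platform code, and, as noted, annotations attached to the deleted label are lost and must be recreated for the new label.

```python
from typing import Dict, List


def move_label_by_recreation(labels: List[Dict], name: str, target_group: str) -> None:
    """Workaround for moving a label: delete it, then recreate it in the target group.

    Annotations attached to the deleted label are lost, and the model's learned
    patterns for it do not carry over to the recreated label.
    """
    old = next(label for label in labels if label["name"] == name)
    labels.remove(old)                                      # delete from the original group
    labels.append({"name": name, "group": target_group})    # recreate in the new group


labels = [{"name": "cat", "group": "animals"}, {"name": "indoor", "group": "scene"}]
move_label_by_recreation(labels, "cat", "pets")
print(labels)  # "cat" now sits in the "pets" group, with no annotations attached
```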