Projects are the highest-level grouping in Aquarium, comparable to "spaces" in other systems. A project is expected to contain data for a single core task. For example, if one model does semantic segmentation and another does object detection, they belong in different projects.
Label class maps define how a Project interprets classification and label information for its data. The simplest label class maps define a list of valid classes and the color used to display each.
More complex class maps can define additional behaviors, such as:

- mappings between labeled classes and the subset of classes the model actually infers
- label classes that should be ignored during metrics computation
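As a concrete illustration, a class map with these behaviors might look like the following plain-Python sketch. The field names (`map_to`, `ignore_in_metrics`) are hypothetical, not Aquarium's actual schema:

```python
# Hypothetical label class map, sketched as plain Python data.
# Field names are illustrative, not Aquarium's actual ingestion schema.
label_class_map = [
    {"name": "car",        "color": (255, 0, 0)},
    {"name": "pedestrian", "color": (0, 255, 0)},
    # A labeled class mapped onto a coarser class the model infers:
    {"name": "truck",      "color": (0, 0, 255), "map_to": "car"},
    # A class excluded from metrics computation:
    {"name": "unknown",    "color": (128, 128, 128), "ignore_in_metrics": True},
]

def resolve_class(name):
    """Return the class a labeled name maps to when evaluating the model."""
    for entry in label_class_map:
        if entry["name"] == name:
            return entry.get("map_to", entry["name"])
    raise KeyError(name)
```

With a map like this, ground truth "truck" labels would be scored against the model's "car" predictions, while "unknown" labels would be skipped entirely during metrics computation.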
A Labeled Frame is one logical "frame" of data, such as an image from a camera stream. It can contain one or more media/sensor inputs, zero or more ground truth labels, and arbitrary user-provided metadata.
An inference frame contains inferences for one logical "frame" of data, which can be evaluated against the ground truth labels of the Labeled Frame that shares the same frame id.
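The pairing of ground truth and inferences by frame id can be sketched in plain Python. The frame structure and field names below are illustrative, not Aquarium's actual format:

```python
# Illustrative sketch: inferences are matched to ground truth by frame id.
labeled_frames = {
    "frame_001": {"labels": [{"class": "car", "bbox": (10, 10, 50, 40)}]},
    "frame_002": {"labels": []},  # zero ground truth labels is valid
}
inference_frames = {
    "frame_001": {
        "inferences": [
            {"class": "car", "bbox": (12, 11, 49, 41), "confidence": 0.93},
        ]
    },
}

def pair_for_evaluation(labeled, inferred):
    """Yield (frame id, ground truth labels, inferences) for shared frame ids."""
    for frame_id, gt in labeled.items():
        if frame_id in inferred:
            yield frame_id, gt["labels"], inferred[frame_id]["inferences"]

pairs = list(pair_for_evaluation(labeled_frames, inference_frames))
```

Here only `frame_001` is evaluated, since `frame_002` has no matching inference frame.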
This term is mainly relevant for image-detection tasks (where frame = image). Sometimes also referred to as a "label" (words are hard!), a "crop" is a specific region of the image (e.g. a bounding box) that carries a ground truth or inference label.
The first image is a frame; the second is a crop.
In the case of image classification, the distinction between crop and frame is unnecessary because crops and frames have a 1:1 relationship (each frame has a single label class associated with it).
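The frame/crop relationship can be sketched as data. The structures below are illustrative only, not Aquarium's schema:

```python
# Illustrative only: how crops relate to frames in detection vs. classification.
detection_frame = {
    "frame_id": "img_123",
    "crops": [  # each crop is one labeled region of the image
        {"class": "stop_sign", "bbox": (34, 50, 20, 20)},
        {"class": "car",       "bbox": (100, 80, 60, 40)},
    ],
}

classification_frame = {
    "frame_id": "img_456",
    "crops": [{"class": "dog"}],  # effectively one "crop": the whole frame
}
```

A detection frame can carry any number of crops, while a classification frame always carries exactly one label class, which is why the crop/frame distinction disappears there.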
One of the unique aspects of Aquarium is its use of neural network embeddings to aid dataset understanding and model improvement. You may also see embeddings referred to as "features" extracted by neural networks.
Neural network embeddings are a representation of what a deep neural network “thought” about a piece of data (from imagery to audio to structured data). This can be encoded in a relatively short vector of floats.
With this embedding information, Aquarium can provide visualizations of your data distribution, as well as features like "related images" (identifying frames in your dataset that are most similar to one another).
You can either (1) provide your own embeddings or (2) have Aquarium generate them on your behalf. Note that (2) is only possible when your frame data is provided as a raw image. See the dedicated embeddings docs for more details.
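To make "related images" concrete, here is a minimal sketch using cosine similarity over embedding vectors. The frame ids and embedding values are made up, and Aquarium's actual similarity search is internal to the product:

```python
import math

# Hypothetical embedding vectors; real ones are typically much longer.
embeddings = {
    "frame_a": [0.9, 0.1, 0.0],
    "frame_b": [0.8, 0.2, 0.1],  # similar scene to frame_a
    "frame_c": [0.0, 0.1, 0.9],  # very different scene
}

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def most_related(frame_id, embeddings):
    """Return the id of the frame whose embedding is closest to frame_id's."""
    query = embeddings[frame_id]
    others = (fid for fid in embeddings if fid != frame_id)
    return max(others, key=lambda fid: cosine_similarity(query, embeddings[fid]))
```

Because `frame_a` and `frame_b` point in nearly the same direction in embedding space, `most_related("frame_a", embeddings)` picks `frame_b`, mirroring how "related images" surfaces the most similar frames in a dataset.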
Issues are arbitrary groupings of Issue Elements, which are either frames or labels in a dataset or inference set. Common examples include "Poorly labeled stop signs," "Out of focus images," and "Difficult examples."