Collection Campaigns
Submit the most relevant data for labeling
If you have a large corpus of new, unlabeled data, Aquarium's Collection Campaign feature helps you quickly collect the subset you actually want to use, without the need for someone to manually review the full corpus.
Based on a set of difficult edge cases identified in a pre-existing Issue, you can find more examples similar to these. You can then send these examples to a labeling provider, use them to retrain your model, and get the most model improvement for the least labeling cost!

User Guide

This guide runs through the complete flow of setting up a Collection Campaign and collecting new, unlabeled data similar to those you've previously identified in an Issue.
The flow will be something like this:
Collection Campaign Flow

Requirements

In order to successfully create a Collection Campaign, the following requirements must be met:
    This feature will only work on Issues where all of the contained elements come from Datasets and Inference Sets uploaded on or after January 11th, 2021.
    All elements within the Issue must be from the same Dataset or Inference Set.
      NOTE: This means that an Issue can't have an element from a dataset and an element from the dataset's corresponding inference set. Those count as distinct sets.
    Embeddings must be generated for the data corpus being searched through.
      Whatever model you use to generate the new corpus's embeddings must be the same model you used to generate the Issue elements' embeddings (see the sketch after this list).
    The data corpus must have its data accessible to Aquarium (URLs, GCS paths, etc.), much like how it currently is for uploaded Datasets and Inference Sets.
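To make the embedding requirement concrete, here is a minimal sketch. The embed_image function is a hypothetical stand-in for your real model's embedding head; the only point is that the same function must produce the embeddings for both sides of the search:

import numpy as np

def embed_image(image: np.ndarray) -> np.ndarray:
    # Hypothetical stand-in: in practice this is your trained model's
    # penultimate-layer (embedding) output for the image.
    return image.mean(axis=(0, 1))

# The SAME embedding function must be used for both the Issue elements
# (from the uploaded Dataset / Inference Set) and the new unlabeled corpus:
issue_embedding = embed_image(np.zeros((224, 224, 3)))
corpus_embedding = embed_image(np.ones((224, 224, 3)))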

1. Start a Collection Campaign (Web App)

In order to start a Collection Campaign, first navigate to the Issues tab in the web app.
Navigate to an Issue that contains the sort of data that you want more examples of. (If you haven't done this yet, create such an Issue.)
For collection campaigns, a well-curated set of issue elements will help you achieve better sampling results.
Ordinarily, this can be a lengthy and tedious manual process, but you can use the process described in Finding Similar Elements Within a Dataset to speed that up.
A further note: your collection campaign will collect the same element type as your seed issue. If your issue is made up of crops, your campaign will collect similar crops; if your issue is made up of frames, your campaign will collect similar frames. Both workflows otherwise operate the same way.
When you go to that Issue's page, you should see a Collection Campaign box in the right panel. Click the Start Campaign button within that box, as follows:
Once a collection campaign is created for an issue, you should see a new Collection Samples tab pop up. There won't be any samples displayed, because the Python collection client hasn't run yet.
In the sidebar, you'll also be able to see additional info such as its version and status:

Deactivating and Reactivating Campaigns

If you no longer want the Python client (described below) to collect new samples for that particular Issue, you can click Deactivate Campaign.
The examples previously collected by a deactivated campaign will still be visible, and this campaign can easily be reactivated at any time by clicking Reactivate Campaign.

Set a Sampling Threshold (Optional)

The sampling threshold (default 0.5) allows you to control how "strict" you want to be for a given campaign. During sampling, a similarity score is calculated for each unlabeled dataframe, which determines whether it qualifies for upload. A lower threshold will result in more samples, but also more false positives. You can tune it according to your labeling needs.
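As an illustration of that tradeoff, here is a sketch with made-up frame ids and similarity scores (not real client output):

# Hypothetical (frame_id, similarity_score) pairs for unlabeled frames:
scores = [("frame_a", 0.91), ("frame_b", 0.62), ("frame_c", 0.48), ("frame_d", 0.31)]

strict = [f for f, s in scores if s >= 0.7]   # fewer samples, higher precision
default = [f for f, s in scores if s >= 0.5]  # the default threshold
loose = [f for f, s in scores if s >= 0.3]    # more samples, more false positives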
Collection Campaigns work as a point-in-time snapshot of all items inside an issue. The Python collection client will search for examples most similar to what was in that snapshot.
If you activate a Collection Campaign and subsequently modify the Issue it is based on (e.g. adding or removing elements), these changes won't be automatically picked up by the Python collection client.
However, a Commit New Campaign Version button will appear. To register any changes, you'll need to click this button:

2. Collect and Upload Your Data (Python Client)

In this section, we'll cover the setup and API calls necessary to scan your local data corpus and upload the examples that are most similar to the ones in your active Collection Campaign.
You'll be using Aquarium's Python client to write a script, similar to how you've used it in the past to upload your data.

Initialize Your Collection Client

First, you'll need to initialize a new collection client, much in the way you would initialize an aquariumlearning client when uploading data.
import aquariumlearning as al

al_client = al.CollectionClient() # Note: This is the new client to use
al_client.set_credentials(api_key="YOUR API KEY HERE")

Fetch the Latest Collection Campaign Info

One of the first commands to run is syncing state. The following command downloads information to the local client that represents all active Collection Campaigns.
al_client.sync_state()
NOTE: As the number of items in your Collection Campaigns increases, the amount of data downloaded also increases.
Please ensure that there is sufficient disk space to support your Collection Campaigns.
Optionally, you can constrain sampling to the collection campaigns in a specific list of projects:
project_names = ["some_project_1", "some_project_2"]
al_client.sync_state(target_project_names=project_names)
Alternately, if you want to specify individual issues, you can do so as follows:
issue_uuids = ["cf8c92f5-e720-47fd-bf8e-ed5b07d47372", "5e8cb31c-9b3e-4a97-89b8-3428543a9778"]
al_client.sync_state(target_issue_uuids=issue_uuids)

Preprocess Your Data Corpus

Now, you'll need to turn the data corpus you are scanning through into a construct that the client can understand.
Luckily, the client already has a Labeled Frames data type to handle this. You can construct Labeled Frames much like you already do when you upload a dataset.
Unlike before, you'll add them directly to a list, rather than a Labeled Dataset:
corpus_of_data_frames = []
for item in my_corpus_of_data:
    # Create a Frame
    frame = al.LabeledFrame(frame_id=item.frame_id, date_captured=item.date_captured)

    # Add relevant metadata
    frame.add_user_metadata("location", item.location)
    frame.add_user_metadata("vehicle", item.vehicle)

    # Add the actual image url
    frame.add_image(
        sensor_id=item.sensor_id, image_url=item.image_url, date_captured=item.date_captured
    )

    # Add relevant embeddings
    frame.add_frame_embedding(embedding=item.frame_embedding)

    # Add the frame to the list of frames
    corpus_of_data_frames.append(frame)

Labels (optional)

Since all the frames in your corpus are Labeled Frames, labels can also be added to each one if needed.
You will want to add labels (and their corresponding embeddings) if your original issue is made up of crop elements (e.g. bounding boxes).
Labels added to each frame must use the same task type as the dataset you are running the collection campaign on (e.g. use add_label_2d_classification if your dataset is a 2D classification task). Added frame labels will be visible in the collection campaign results.
NOTE: Adding confidence values to your labels is not currently supported in collection campaign results.
Here are some common label types, their expected formats, and how to work with them in Aquarium:

Classification

# Standard 2D case
frame.add_label_2d_classification(
    # The sensor id of the image this label corresponds to
    sensor_id='some_camera',
    # A unique id across all other labels in this dataset
    label_id='unique_id_for_this_label',
    classification='dog'
)

# 3D classification
frame.add_label_3d_classification(
    # A unique id across all other labels in this dataset
    label_id='unique_id_for_this_label',
    classification='dog',
    # Optional, defaults to implicit WORLD coordinate frame
    coord_frame_id='robot_ego_frame',
)

2D Bounding Box

frame.add_label_2d_bbox(
    # The sensor id of the image this label corresponds to
    sensor_id='some_camera',
    # A unique id across all other labels in this dataset
    label_id='unique_id_for_this_label',
    classification='dog',
    # Coordinates are in absolute pixel space
    top=200,
    left=300,
    width=250,
    height=150
)

3D Cuboid

Aquarium supports 3D cuboid labels, with 6-DOF position and orientation.
frame.add_label_3d_cuboid(
    label_id="unique_id_for_this_label",
    classification="car",
    # XYZ dimensions of this cuboid
    dimensions=[1.0, 0.5, 0.5],
    # XYZ position of the center of this object
    position=[2.0, 2.0, 1.0],
    # An XYZW ordered object rotation quaternion
    rotation=[0.0, 0.0, 0.0, 1.0],
    # Optional: If your cuboid is relative to a specific
    # coordinate frame, you can reference it by name here.
    coord_frame_id="robot_ego_frame"
)

2D Semseg

2D Semantic Segmentation labels are represented by an image mask, where each pixel is assigned an integer value in the range of [0,255]. For efficient representation across both servers and browsers, Aquarium expects label masks to be encoded as grey-scale PNGs of the same dimension as the underlying image.
If you have your label masks in the form of numpy ndarrays, we recommend using the Pillow Python library to convert them into PNGs:
! pip3 install pillow

from PIL import Image
...

# 2D array, where each value is [0,255] corresponding to a class_id
# in the project's label_class_map.
int_arr = your_2d_ndarray.astype('uint8')

Image.fromarray(int_arr).save(f"{imagename}.png")
Because this will be loaded dynamically by the web-app for visualization, this image mask will need to be hosted somewhere. To upload it as an asset to Aquarium, you can use the following utility:
mask_url = al_client.upload_asset_from_filepath(project_id, dataset_id, filepath)
This utility hosts and stores a copy of the label mask (not the underlying RGB image) with Aquarium. If you would like your label masks to remain outside of Aquarium, chat with us and we'll help figure out a good setup.
Now, we add the label to the frame like any other label type:
frame.add_label_2d_semseg(
    # The sensor id of the image this label corresponds to
    sensor_id='some_camera',
    # A unique id across all other labels in this dataset
    label_id='unique_id_for_this_label',
    # Expected to be a PNG, with values in [0,255] that correspond
    # to the class_id of classes in the label_class_map
    mask_url='url_to_greyscale_png'
)
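Putting the three semseg steps together, here is a brief end-to-end sketch. your_2d_ndarray, project_id, and dataset_id are the placeholders from the snippets above, and the mask filename and label id are made up:

from PIL import Image

# 1. Encode the mask as a grey-scale PNG of class ids in [0, 255]
int_arr = your_2d_ndarray.astype('uint8')
Image.fromarray(int_arr).save("mask_frame_001.png")

# 2. Host the mask with Aquarium
mask_url = al_client.upload_asset_from_filepath(project_id, dataset_id, "mask_frame_001.png")

# 3. Attach the hosted mask to the frame as a semseg label
frame.add_label_2d_semseg(
    sensor_id='some_camera',
    label_id='frame_001_semseg',
    mask_url=mask_url
)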

2D Polygon Lists

Aquarium represents instance segmentation labels as 2D Polygon Lists. Each label is represented by one or more polygons, which do not need to be connected.
frame.add_label_2d_polygon_list(
    # The sensor id of the image this label corresponds to
    sensor_id='some_camera',
    # A unique id across all other labels in this dataset
    label_id='unique_id_for_this_label',
    classification='dog',
    # All coordinates are in absolute pixel space
    #
    # These are polygon vertices, not a line string. This means
    # that no vertices are duplicated in the lists.
    polygons=[
        {'vertices': [(x1, y1), (x2, y2), ...]},
        {'vertices': [(x1, y1), (x2, y2), ...]}
    ],
    # Optional: indicate the center position of the object
    center=[center_x, center_y]
)

Assign Similarity Scores to Each Data Corpus Frame

Now that you've transformed your data corpus into a list of Labeled Frames to scan, you'll make two simple API calls.
The first API call iterates through each frame in your list and assigns a similarity score between that frame and each of the active Collection Campaigns. This call does not upload any data:
# Can be called any number of times
al_client.sample_probabilities(corpus_of_data_frames)
If you are dealing with a crop issue, a similarity score will be calculated for each crop in a given sample frame, and the highest scoring crop will be the overall frame's "similarity score".
This means that even if a given sample frame has multiple qualifying "similar" crops, only the most similar crop will appear in the app UI.
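In other words (a sketch with made-up numbers, not the client's actual internals):

# One similarity score per crop in a given sample frame:
crop_scores = [0.42, 0.77, 0.58]

# The highest-scoring crop becomes the frame's overall similarity score:
frame_score = max(crop_scores)  # 0.77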

Filter and Upload Relevant Examples

The second API call will filter the frames based on an internally calibrated threshold. This threshold is determined as follows (sketched below):
    If the override_sampling_threshold parameter is specified in the save_for_collection call, this threshold is used for all of the collection campaigns from the earlier sync_state call.
    Otherwise, if a campaign's sampling threshold was specifically configured in the web app, that is the threshold used for that campaign.
    If no override or campaign-specific threshold was set, a default of 0.5 is used.
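A minimal sketch of that precedence (illustrative only; the function name is hypothetical and this is not the client's actual implementation):

def effective_threshold(override_sampling_threshold=None, campaign_threshold=None):
    # 1. An explicit override applies to all synced campaigns
    if override_sampling_threshold is not None:
        return override_sampling_threshold
    # 2. Otherwise, use the campaign's threshold configured in the web app
    if campaign_threshold is not None:
        return campaign_threshold
    # 3. Fall back to the default
    return 0.5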
The frames that meet this threshold are the most similar examples, and will be uploaded back to Aquarium for analysis:
# Will upload all frames passing the threshold that had
# sample_probabilities called on them since the client was
# initialized
al_client.save_for_collection()

# Alternately, you can specify an override threshold.
al_client.save_for_collection(override_sampling_threshold=0.7)

# Alternately, you can specify a target count. That will attempt
# to save up to `target_sample_count` entries, prioritized by
# highest similarity score.
al_client.save_for_collection(target_sample_count=100)
If you want to see what your collected samples look like before actually uploading them, there is a dry_run flag that you can specify:
al_client.save_for_collection(dry_run=True)
It will (1) display basic stats and (2) link out to a "preview frame", where a single sample frame is uploaded so you can make sure it looks how you expect (similar to the one used in dataset uploads).

3. View your Collection Campaign (Web App)

Now you can view the collected samples in the web app!
To do so, simply navigate back to the Issue that contains the active Collection Campaign (or refresh the page if you already have it open). New data should've appeared, assuming that your data corpus had examples that passed the similarity threshold.
You can sort the samples according to similarity score or campaign version.

Understanding Why Samples were Selected

If you are using the most recent version of the client, you can now view the cluster of issue elements that a sample was closest to (which may help build intuition on why a sample was selected).
Simply click on the question mark displayed next to a particular sample's campaign version info:

Viewing Collection Rate

Note: Collection rate is not displayed for older collection campaigns, because some of the info required to calculate it was not recorded at the time.
In the sidebar, you can see the collection rate of your campaign:
This reports the number of samples uploaded, out of the number of dataframes actually processed by the Python collection client.
Note: Although uploaded samples are deduped by task_id, the collection client does not dedupe when tracking the number of frames that have been looked at.
Consequently, if you run the collection client over the same (or overlapping) set of unlabeled dataframes, your reported collection rate will be lower than it actually is.
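For example (made-up numbers):

frames_processed = 2000   # counted per run, NOT deduped
samples_uploaded = 150    # deduped by task_id

collection_rate = samples_uploaded / frames_processed   # 0.075

# Re-running the client over the same 2000 frames doubles the denominator,
# but dedup keeps the numerator at 150, so the reported rate drops:
rate_after_rerun = 150 / 4000   # 0.0375, lower than the true rate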

Discarding Bad Samples

To remove samples that don't match what you are looking for, you can select and discard them:

Exporting Samples for Labeling

To export the collected frames, you can click the blue Download button, much like you already do when exporting Issue Elements from Aquarium today.
You can then send these to a labeling provider and use the results to retrain your model. This latest dataset iteration (and corresponding inference set) can be uploaded to Aquarium via the standard data ingestion flow, and you can continue repeating this process to improve your model performance!
Yay positive feedback loops

Unlabeled Indexed In-App Collections

NOTE: As with collection campaign uploads, your unlabeled indexed dataset should have region proposals (likely from your model), which are uploaded as "labels".
The previously described collection campaign flow requires periodically running the Aquarium Python client to find potentially relevant samples. With this new feature, however, you can do a one-time upload of your corpus as an unlabeled indexed dataset (the search dataset), and handle the rest of your sampling workflow in the app UI:
    1. Identify rare examples of interest from a labeled dataset or inference set (the seed dataset), and add them to an Issue.
    2. Choose which unlabeled indexed dataset (the search dataset) to search through.
    3. Generate similar elements from that unlabeled indexed dataset.
    4. (Optional) Export the relevant unlabeled examples for labeling.
Once your engineering team has uploaded the corpus to Aquarium, your ops team can handle the entire "rare scenario" workflow on their own.

Generating Embedding Versions

In order for similarity search to work, your search dataset and your seed dataset must have compatible embedding spaces. To specify this explicitly, you will be using embedding versions (represented by UUIDs).
To determine the embedding version for your seed dataset, go to the Project Details page and select the Embeddings tab:
Select the name of your seed dataset from the dropdown and click Get Version:
The UUID that appears is the embedding version that you will use in the following section, when uploading an unlabeled indexed dataset via the Python client.
Note that the Get Version button will be disabled if your seed dataset is still post-processing.

Uploading an Unlabeled Indexed Dataset

NOTE: The argument seed_dataset_name_for_unlabeled_search has been deprecated and is no longer supported by our API. You will need to use existing_embedding_version_uuid going forward.
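If you have older upload code, the migration is a one-argument swap. A sketch, assuming the deprecated argument was previously passed to the same create_dataset call:

# Before (deprecated, no longer supported):
# al_client.create_dataset(..., seed_dataset_name_for_unlabeled_search="fullset_0")

# After (use the embedding version UUID from the previous section):
# al_client.create_dataset(..., existing_embedding_version_uuid="f700a622-d252-4169-839a-70a3ec6ca741")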
This is basically the same as uploading a normal labeled dataset, but with the following flags added to the al_client.create_dataset call:
(1) A flag that specifies that the dataset is unlabeled:

is_unlabeled_indexed_dataset=True

(2) The embedding version (see the previous section) for the issue used in sampling, in other words, the dataset that the example issue elements come from:

existing_embedding_version_uuid="f700a622-d252-4169-839a-70a3ec6ca741"

End-to-end Code Sample

First, upload a labeled dataset (see code example); say it's called fullset_0.
Then go to the UI to get the embedding version. For the sake of example, say it's f700a622-d252-4169-839a-70a3ec6ca741. (For a real upload, replace this with the one you see for your own seed dataset.)
Then upload your unlabeled dataset as follows:
# Your unlabeled dataset will still be instantiated as a
# LabeledDataset :P

search_dataset = al.LabeledDataset()
for entry in label_entries:
    # Create a frame object, using the filename as an id
    frame_id = entry['file_name'].split('.jpg')[0]
    frame = al.LabeledFrame(frame_id=frame_id)

    # Add arbitrary metadata
    frame.add_user_metadata('some_field', 'some_value')

    # Add an image to the frame
    image_url = "https://storage.googleapis.com/aquarium-public/quickstart/pets/imgs/" + entry['file_name']
    frame.add_image(sensor_id='cam', image_url=image_url)

    # Add the region proposal as a label
    label_id = frame_id + '_proposal'
    frame.add_label_2d_classification(
        sensor_id='cam',
        label_id=label_id,
        classification=entry['class_name']
    )

    # Add the frame to the dataset collection
    search_dataset.add_frame(frame)

al_client.create_dataset(
    aquarium_project,
    aquarium_dataset,
    dataset=search_dataset,
    wait_until_finish=False,
    is_unlabeled_indexed_dataset=True,
    existing_embedding_version_uuid="f700a622-d252-4169-839a-70a3ec6ca741"
)

Running a search in the app

Once your unlabeled indexed dataset has finished uploading, you can run through the following workflow:

Iterative Refinement Collection Campaigns

Users often will want to iteratively refine their search results. A user can start a search with a few "query" samples that they want to look for, run a collection campaign search over their labeled / unlabeled dataset, select and add relevant elements to the issue, and then run another search. By going through multiple iterations of search and refinement, the user can grow the number of relevant examples in the issue and get more relevant collection results after each iteration.
To run an iterative refinement collection campaign:
    1. Follow the steps above to generate similar elements from an unlabeled indexed dataset.
    2. "Accept" relevant elements returned from the search, and optionally "Discard" irrelevant elements. "Accepted" elements will be used as seed elements when the similarity search is rerun.
    3. Click "Recalculate Similar Dataset Elements" to rerun the search and generate new similar elements seeded from the newly "Accepted" elements as well as the original issue elements. Elements that have been classified as either "Accepted" or "Discarded" will keep their status when the search is rerun.
    4. Repeat as needed.