2021-05-26

General

Webhooks

Webhooks for notification events have been added! You can read more about how to use them here.
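
If you want to consume these notifications programmatically, here is a minimal sketch of a receiver, assuming Aquarium POSTs a JSON payload to a URL you configure. The endpoint path and payload fields below are illustrative assumptions, not the documented schema:

    # Minimal webhook receiver sketch (endpoint path and payload fields are assumptions).
    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/aquarium-webhook", methods=["POST"])
    def handle_notification():
        payload = request.get_json(force=True)
        # Route on a hypothetical event type field in the payload.
        event_type = payload.get("event_type", "unknown")
        print(f"Received {event_type} notification: {payload}")
        return "", 204

    if __name__ == "__main__":
        app.run(port=8000)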

Visible Bounding Box Class Names

The class name of the bounding box can now be rendered alongside the box to better visualize your bounding box data.

This can be controlled using a toggle in the display settings:

Embedding View

Highlighted Similar Elements

The 2D embedding view is best for understanding the overall topology and structure of your dataset, but the distance between points can be misleading.

To help you better understand the notion of "similarity", you now have the option of highlighting the most "similar elements" to a selected datapoint in the embedding view:

The "similar elements" will appear brighter than other lasso-ed elements. In the upper left preview pane, you can toggle this option on and off.

Class Visibility Toggles

In the display settings, you can now control class visibility in the embeddings view:

The embeddings view will filter out the classes you have disabled in the display settings (and it works on all visualization types):

Issues

Comments and Mentions

You can now make comments on issues or issue elements:

You can also mention others in your organization to notify them of an issue or issue element that you would like them to look at:

To enable/disable in-app notifications or emails for mentions, there is a new option added to your user settings:

Labeling Export

Instead of downloading issues as JSONs and CSVs and then scripting around them, users can now integrate Aquarium issues directly with their labeling providers. By setting up a webhook integration, you can click a button in the Aquarium UI and submit data directly to your labeling system. See our docs for more details!
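
As a rough sketch of what the receiving end of such an integration could look like, the handler below turns a submitted payload into labeling tasks. The payload shape ("elements", "frame_id") and the labeling provider endpoint are illustrative assumptions:

    # Sketch: receive a labeling-export webhook and forward each element
    # to a (hypothetical) labeling provider's task-creation endpoint.
    from flask import Flask, request
    import requests

    app = Flask(__name__)
    LABELING_API = "https://labeling.example.com/api/tasks"  # hypothetical

    @app.route("/labeling-export", methods=["POST"])
    def export_to_labeling():
        payload = request.get_json(force=True)
        for element in payload.get("elements", []):  # assumed field name
            requests.post(LABELING_API, json={"frame_id": element.get("frame_id")})
        return "", 204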

Similar Dataset Elements

If you've identified a few problematic elements in an Issue, you may want an easy way to "grow" the issue by finding other similar elements within that same dataset.

Once an issue is created, you can generate similar elements by going to the Similar Dataset Elements tab and clicking the Calculate Similar Dataset Elements button as follows:

See the docs for more details.

Additional Download Metadata

Previously, if you were viewing an inference set, we would attach special metadata to issue elements that were created from either (1) the confusion matrix view or (2) the confusion coloring embedding view.

This info was attached to each issue element and looked something like the following:

    "label_metadata": {
      "confidence": 0.99923337,
      "confidenceThreshold": 0.1,
      "gtLabelClassId": 1,
      "gtLabelClassification": "car",
      "gtLabelId": "123_gt",
      "infLabelClassId": 2,
      "infLabelClassification": "bike",
      "infLabelId": "123_pred",
      "iou": 0.9716007709503174,
      "iouThreshold": 0.5
    },

Now whenever you download issue elements from an inference set, we automatically attempt to match ground truth labels and inference labels, so that we can attach the associated metadata to each element.
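
For example, a small script can scan a downloaded export for misclassified elements using these fields. This is a sketch assuming the export is a JSON list of elements, each carrying the label_metadata object shown above:

    # Sketch: find elements where ground truth and inference disagree.
    import json

    with open("issue_elements.json") as f:  # assumed export file name
        elements = json.load(f)

    for el in elements:
        meta = el.get("label_metadata", {})
        if meta and meta.get("gtLabelClassification") != meta.get("infLabelClassification"):
            print(
                f"{meta.get('gtLabelId')}: {meta.get('gtLabelClassification')} -> "
                f"{meta.get('infLabelClassification')} (IOU {meta.get('iou', 0):.2f})"
            )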

Collection Campaigns

Specify Target Campaign by Issue UUID

Now you can run your collection client for specific active target campaign(s) using the target_issue_uuids argument as follows:

# Source issue UUIDs for the campaigns you care about
target_issue_uuids = ["some-issue-uuid"] 
client.sync_state(target_issue_uuids=target_issue_uuids)

View Collection Rate

NOTE: This feature is only available in the app for newer collection campaigns.

For a given collection campaign, you can now view the number of unlabeled dataframes processed vs. sampled.

See docs for more details.

Display Sample's Similar Elements

NOTE: This feature is only available in the app for newer collection campaigns.

To better understand why a particular sample was selected for a collection campaign, you can now view which "seed issue elements" from the campaign it was most similar to:

See docs for more details.

Target Sample Count

In older collection clients, you could specify which samples to upload based on a "similarity score" threshold.

However, instead of selecting all samples above a threshold, you may want a specific number of samples to collect. Now you can do so as follows:

client.save_for_collection(target_sample_count=100)

This will upload up to target_sample_count samples, prioritized by similarity score.
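
In other words, the selection behaves like taking the top N candidates by score. Here is a sketch of the equivalent logic, with illustrative scores and an assumed similarity_score field:

    # Sketch: "prioritized by similarity score" = sort descending, take top N.
    candidates = [
        {"frame_id": "a", "similarity_score": 0.91},
        {"frame_id": "b", "similarity_score": 0.74},
        {"frame_id": "c", "similarity_score": 0.88},
    ]
    target_sample_count = 2
    ranked = sorted(candidates, key=lambda s: s["similarity_score"], reverse=True)
    selected = ranked[:target_sample_count]  # frames "a" and "c"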

Dry Run

If you want to see what your collected samples look like before actually uploading them, there is a dry_run flag that you can specify:

client.save_for_collection(dry_run=True)

It will (1) display basic stats and (2) link out to a "preview frame", where a single sample frame is uploaded so you can make sure it looks how you expect (similar to the one used in dataset uploads).
