2020-12-21

App Improvements

Geodata View

At the request of a few of our customers, we now have support for uploading geospatial metadata and plotting it on a map! If your sensors collect data from different regions, this can help you identify trends in failure patterns.

We also support satellite view, in case your dataset depends more on natural features (fields, bodies of water) than on road features:

NOTE: This view will only be available if you use the latest Python client to tag your frames with EPSG:4326 WGS84 geo data during upload.

>>> frame.add_geo_latlong_data(37.81939245823887, -122.47846387094987)

Because datasets are currently immutable, you will need to re-upload to use this feature for existing datasets.
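Since the client expects EPSG:4326 WGS84 coordinates in (latitude, longitude) order, a small sanity check before tagging can catch swapped arguments early. The `validate_wgs84` helper below is a hypothetical utility for illustration, not part of the client:

```python
def validate_wgs84(lat: float, lon: float) -> tuple:
    """Check that a coordinate pair is valid EPSG:4326 (WGS84) lat/long.

    Latitude must be in [-90, 90] and longitude in [-180, 180]. A common
    bug is passing (longitude, latitude) in swapped order, which this
    catches whenever the latitude value falls outside its range.
    """
    if not -90.0 <= lat <= 90.0:
        raise ValueError(f"latitude {lat} out of range; did you swap lat/long?")
    if not -180.0 <= lon <= 180.0:
        raise ValueError(f"longitude {lon} out of range")
    return (lat, lon)

# Validate before tagging each frame, e.g.:
#   frame.add_geo_latlong_data(*validate_wgs84(lat, lon))
```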

Query URLs

Now, when you select your dataset, inference set, and filters in the Explore view, these will be saved in your URL—this means that you can bookmark the same query for later (or share it with someone else!).

Your URL will automatically be updated when you update your queries—all you need to do is copy it and save it as needed. (Note that for now, you will still need to hit the search bar when visiting this query URL.)
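To illustrate how a query can round-trip through a URL, here is a minimal sketch using Python's standard `urllib.parse`. The parameter names and base URL are illustrative assumptions, not the app's actual URL scheme:

```python
from urllib.parse import urlencode, parse_qs, urlparse

def build_query_url(base: str, dataset: str, inference_set: str, filters: dict) -> str:
    # Serialize the current query state into shareable query parameters.
    # (Parameter names here are made up for illustration.)
    params = {"dataset": dataset, "inference_set": inference_set, **filters}
    return f"{base}?{urlencode(params)}"

url = build_query_url(
    "https://app.example.com/explore",
    dataset="city_drive_v2",
    inference_set="model_17",
    filters={"label_class": "pedestrian"},
)

# Anyone opening this URL can recover the same query state:
restored = parse_qs(urlparse(url).query)
```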

Expandable Modals in Embedding View

Previously, when you selected elements in the Embedding view, their corresponding images showed up in a mini modal in the upper right corner. Now, we've added the ability to expand that modal into the detailed frame view: just click on the expand icon next to the frame ID!

To navigate through these selected elements, you can use arrow keys the same way that you might in the Grid view.

Deselecting Lassoed Elements in Embedding View

Also, you can now deselect specific elements from your lassoed selection in the Embedding view. If you are lassoing a cluster of elements for investigation, this allows you to filter out the ones that are no longer relevant.

To do this, simply click the minus button in the mini-modal in the upper right corner:

Adjustable Point Size in Embedding View

In the Embedding view, elements can be clustered close together, and a large point size may make it hard to distinguish between them. We've added an Embedding Point Size slider to the Display Settings panel, so that you can adjust this to whatever is visually clearest:

Editable Issue Element Statuses + Filtering

A common workflow we see is (1) using an Issue to "batch" a series of potentially problematic elements and (2) manually reviewing these issue elements in more detail.

To help distinguish between issue elements, we've introduced element statuses. Now you can tag specific issue elements as "Not Started" or "Done":

You can also filter your issue elements by these statuses:

This allows you to clearly separate "Not Started" elements from "Done" elements. Let us know if there are other custom statuses that you would find useful!

Python Client / Data Upload Improvements

Classification-Specific Workflows

A common source of confusion we've seen has been the differences between "Frames" and "Crops", and how these are treated when they are added to issues.

For detection tasks, the distinction is clear: "frame" refers to the entire image, and "crop" refers to a specific part of the image (e.g. a bounding box) that has a ground-truth or predicted label.

The first image is a frame; the second is a crop. An issue can only contain one type of element (either "frame" or "crop"). A crop element can't be added to an existing issue that contains frame elements.
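The detection-task relationship can be pictured with a couple of simple dataclasses. These are a hypothetical sketch of the data model, not the client's actual types:

```python
from dataclasses import dataclass, field

@dataclass
class Crop:
    # A labeled region within a frame, e.g. a bounding box.
    label: str
    bbox: tuple  # (x_min, y_min, width, height) in pixels

@dataclass
class Frame:
    # The entire image; for detection tasks it may contain many crops.
    frame_id: str
    crops: list = field(default_factory=list)

frame = Frame(frame_id="frame_001")
frame.crops.append(Crop(label="car", bbox=(10, 20, 120, 80)))
frame.crops.append(Crop(label="pedestrian", bbox=(200, 40, 30, 90)))
# One frame, two crops: an issue may hold frame elements or crop
# elements, never a mix of both.
```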

For classification tasks, this distinction can seem unnecessary because there is a 1:1 relationship between crops and frames (each frame has a single label class associated with it). In this case, it shouldn't matter whether an issue element is added from the "Frame" or "Crop" view.

Now you can explicitly tag your project as a classification task during upload, by setting the primary_task field to CLASSIFICATION.

>>> client.create_project(project_name, label_class_map, primary_task=CLASSIFICATION)

This allows us to treat all of your issues the same (no unnecessary crop vs frame distinction). In the future, this will also allow us to expose more classification-first features.

Classnames Wrapper Function

Previously, in the client, we provided a series of functions (from_classnames_max10, from_classnames_max20, from_classnames_turbo, and from_classnames_viridis) to help assign colors to classnames in your Label Class Map (see documentation).

These functions assign colors in a way that makes it easy to distinguish between classes in the Embedding view, based on how many classes your project has. (Randomly assigning colors, on the other hand, could result in two classes looking indistinguishable in the UI.)

To make these helper functions easier to use, we've created a single wrapper function from_classnames that will take care of choosing the right color assignment based on your number of classes.

Example usage:

>>> label_class_map = al.LabelClassMap.from_classnames(human_readable_classes)
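The kind of count-based dispatch this wrapper performs can be sketched in pure Python. The function below is an illustration of the idea (evenly spaced hues stay distinguishable for any class count), not the library's actual implementation:

```python
import colorsys

def colors_for_classes(classnames):
    # Illustrative sketch of count-based palette selection: space hues
    # evenly around the color wheel so that neighboring classes remain
    # visually distinct no matter how many classes there are.
    # (Not the library's actual implementation.)
    n = len(classnames)
    hues = [i / n for i in range(n)]
    rgb = [colorsys.hsv_to_rgb(h, 0.75, 0.95) for h in hues]
    return dict(zip(classnames, rgb))

palette = colors_for_classes(["car", "pedestrian", "cyclist"])
```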
