Autonomous Vehicles

Create training sets for any kind of deep learning vehicle application by live streaming your video footage onto our platform. Apply augmentations and filters, and import your training sets directly into your machine learning framework in real time. Mimic different scenarios with live streaming: we support continuous, open-ended streams that enable real-time training patterns, and multiple types of video sources, including LiDAR.

Search, browse, and filter hours of video footage with synced telemetry and export selected scenes directly to your server in your preferred machine learning framework. We remove the need to manually modify or clean videos and resync telemetry data. We also support multiple programming languages.


Filter and search metadata such as geo-coordinates. Specify the entropy of each dataset in terms of colours, brightness correction, and other parameters to remove bias.

Telemetry

We support importing any kind of telemetry, including accelerometers, temperature sensors, and GPS coordinates. Telemetry data is fully searchable and filterable. CVEDIA provides several ways to annotate or modify telemetry data recorded alongside video footage: numerical values can be smoothed, remapped, or clipped, and individual fields can be overridden with custom values.
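
As a rough sketch of these operations, the snippet below smooths, remaps, and clips a single telemetry channel with plain NumPy; the channel name, window size, and value ranges are assumptions, not part of the CVEDIA API.

```python
import numpy as np

# Hypothetical telemetry channel: raw steering angle in degrees, one value per frame.
steering = np.array([-3.2, -1.1, 0.4, 2.8, 7.9, 15.3, 14.8, 9.2, 3.1, 0.2])

# Smooth: simple moving average over a 3-frame window.
kernel = np.ones(3) / 3.0
smoothed = np.convolve(steering, kernel, mode="same")

# Remap: rescale from degrees to a normalised [-1, 1] range (assuming +/-30 deg full lock).
remapped = smoothed / 30.0

# Clip: discard values outside the valid range.
clipped = np.clip(remapped, -1.0, 1.0)

# Override: replace an individual field with a custom value, e.g. a known-bad frame.
clipped[5] = 0.0

print(clipped)
```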

Generate video clips in real time from hours of footage and extract scenes where a certain threshold is reached or certain events occur, using our MQL (Metadata Query Language) editor. By specifying custom criteria in a flexible structured language, you can dynamically generate video clips out of larger video recordings, for example listing only clips that show a certain amount of steering momentum on an autonomous vehicle.
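
The actual MQL syntax is not shown here; conceptually, though, a query of this kind reduces to filtering per-frame telemetry by a condition and collapsing the matching frames into clip ranges. The sketch below illustrates that idea in plain Python, with a hypothetical steering-momentum channel and threshold.

```python
import numpy as np

def clips_from_condition(mask, min_length=5):
    """Collapse a boolean per-frame mask into (start_frame, end_frame) clip ranges."""
    clips, start = [], None
    for i, hit in enumerate(mask):
        if hit and start is None:
            start = i
        elif not hit and start is not None:
            if i - start >= min_length:
                clips.append((start, i))
            start = None
    if start is not None and len(mask) - start >= min_length:
        clips.append((start, len(mask)))
    return clips

# Hypothetical per-frame telemetry: steering momentum recorded alongside the video.
steering_momentum = np.abs(np.random.randn(1000))

# "Show only frames where steering momentum exceeds 1.5" expressed as a mask.
mask = steering_momentum > 1.5
print(clips_from_condition(mask))
```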

Video Cutting

Our video editing tools free you from manually creating clips out of long recordings and aligning the associated telemetry. Our non-destructive annotation tool lets you dynamically create clips by setting start and end positions, marking sections as usable or unusable, adding labels to sequences, and flagging events.


These annotations can be directly used in our MQL editor to dynamically create clips based on various conditions.
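
One way to picture a non-destructive clip annotation is as a lightweight record layered on top of the untouched recording. The sketch below is purely illustrative; the field names do not reflect CVEDIA's internal schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ClipAnnotation:
    """Illustrative, non-destructive clip marker on top of an untouched recording."""
    start_frame: int
    end_frame: int
    usable: bool = True
    labels: List[str] = field(default_factory=list)
    events: List[str] = field(default_factory=list)

# Mark a 10-second section (at 30 fps) as a usable "overtaking" sequence.
clip = ClipAnnotation(start_frame=900, end_frame=1200,
                      labels=["highway", "overtaking"],
                      events=["lane_change"])
print(clip)
```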

We provide full integration with LiDAR data. We ingest raw point cloud data, single images, or video footage for machine learning. Select segments of a point cloud and export them as a 2D raster generated from the last return.
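
For illustration, last-return rasterisation can be sketched as binning points onto a 2D grid and keeping, per cell, the value of the highest return number. The snippet below assumes points carry x/y coordinates, an intensity, and a return number; it is not our ingestion pipeline.

```python
import numpy as np

def last_return_raster(points, cell_size=0.5, grid=(200, 200)):
    """Rasterise (x, y, intensity, return_number) points into a 2D grid,
    keeping the intensity of the highest (last) return per cell."""
    raster = np.zeros(grid, dtype=np.float32)
    best_return = np.full(grid, -1, dtype=np.int32)
    for x, y, intensity, ret in points:
        row = int(y / cell_size)
        col = int(x / cell_size)
        if 0 <= row < grid[0] and 0 <= col < grid[1] and ret > best_return[row, col]:
            best_return[row, col] = ret
            raster[row, col] = intensity
    return raster

# Hypothetical points: x, y in metres, intensity in [0, 1], return number 1..n.
points = [(10.2, 33.7, 0.8, 1), (10.3, 33.6, 0.4, 2), (55.0, 12.1, 0.9, 1)]
print(last_return_raster(points).shape)
```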

Annotation Types We Support

Some of our more popular annotation tools used specifically for autonomous vehicle applications include single- and multi-layer labeling, bounding box tracking, and segmentation maps.

Labeling

Our labeling system is highly configurable and multi-layer aware: whether you are cropping, labeling, or producing segmentation maps, you can work in multiple layers and select which one(s) to export in the end. Related labels that are grouped together can be assigned as a whole to a single annotation layer. For example, you can produce a semantic segmentation map of a street view in one layer and keep object masses and direction vectors in the second and third layers.


Labels also serve as templates in which you can pre-define additional attributes to be applied to an annotated image or polygon. During export, these values can be included as individual fields or combined to form multi-layer segmentation maps.
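
A label template can be thought of as a reusable bundle of attributes applied to every annotation that receives the label. The sketch below is a hypothetical illustration of that idea; the field names are assumptions.

```python
# Hypothetical label template: attributes pre-defined once, then applied to every
# polygon or image that receives the "vehicle" label.
VEHICLE_TEMPLATE = {
    "label": "vehicle",
    "attributes": {"rigid": True, "max_speed_kmh": 130},
    "export": {"as_fields": True, "segmentation_layer": 2},
}

# Applying the template to a freshly drawn polygon annotation.
annotation = {"polygon_id": 42, **VEHICLE_TEMPLATE}
print(annotation)
```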

Segmentation Maps

The easiest method for creating segmentation map annotations is polygon partitioning. By drawing intersecting splines on top of your source imagery, you effectively cut the predefined area into many sub-polygons (segments). This is a highly efficient way of generating pixel-dense segmentation maps: the class of every single pixel is defined from the start, so you never have to manually align the polygons of two segments running parallel to each other.


Alternatively, if you are not looking for pixel-dense representations but instead want to annotate only certain types of objects, you can draw closed polygons. Since these often encompass a whole object including its occluded parts (e.g. a car behind a lamp post), we have added the ability to control the Z-index (depth) of each object. The Z-index defines which segment is closer to the camera and should be rendered last when outputting segmentation map images.
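
The rendering rule implied by the Z-index can be sketched as sorting polygons by depth before rasterising, so that segments closer to the camera are drawn last and overwrite more distant ones. The example below uses Pillow with made-up class IDs and coordinates.

```python
from PIL import Image, ImageDraw

# Hypothetical annotations: (class_id, z_index, polygon vertices).
annotations = [
    (1, 0, [(0, 40), (128, 40), (128, 128), (0, 128)]),   # road, far from camera
    (2, 5, [(50, 10), (90, 10), (90, 110), (50, 110)]),   # car, closer
    (3, 9, [(60, 0), (66, 0), (66, 128), (60, 128)]),     # lamp post, closest
]

mask = Image.new("L", (128, 128), 0)
draw = ImageDraw.Draw(mask)

# Draw in ascending Z-index so the segment closest to the camera is rendered last.
for class_id, z, polygon in sorted(annotations, key=lambda a: a[1]):
    draw.polygon(polygon, fill=class_id)

mask.save("segmentation_map.png")
```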

Bounding Boxes

In many ways, bounding boxes share traits with the polygons found in segmentation maps. The main difference is that they are constrained to a rectangular or square shape. On export the two become interchangeable: a polygon can automatically be enclosed by a bounding box fitting its maximum extents, and the residual area outside the polygon can be filled through various methods, such as Gaussian noise, background, solid color, or opacity.
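
As a sketch of that export step, the snippet below derives a bounding box from a polygon mask's maximum extents and fills the residual area outside the polygon with Gaussian noise; the mask, image, and noise parameters are illustrative.

```python
import numpy as np

def crop_polygon_with_noise(image, polygon_mask, noise_std=25.0):
    """Crop the bounding box enclosing a polygon mask and fill the area
    outside the polygon with Gaussian noise."""
    ys, xs = np.nonzero(polygon_mask)
    y0, y1, x0, x1 = ys.min(), ys.max() + 1, xs.min(), xs.max() + 1

    crop = image[y0:y1, x0:x1].astype(np.float32)
    mask = polygon_mask[y0:y1, x0:x1]

    noise = np.random.normal(127.0, noise_std, crop.shape)
    crop[~mask] = noise[~mask]
    return np.clip(crop, 0, 255).astype(np.uint8)

# Toy example: a grayscale image with a square polygon mask.
image = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
mask = np.zeros((64, 64), dtype=bool)
mask[20:40, 20:40] = True
print(crop_polygon_with_noise(image, mask).shape)  # (20, 20)
```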


2D bounding boxes are created by first setting the aspect ratio constraints (e.g. 1:1 for a square) and then dragging the box on top of your image. Multiple labels and attributes can be assigned automatically to a single bounding box by using label templates or by multi-selecting labels. The entire process can be performed in multiple freely definable layers.


Our system treats bounding boxes as unique new images, so they can undergo the same annotation as the original source image. For example, a cropped bounding box on a satellite image or whole-slide image can be used as a starting point for a segmentation map.

Bounding Box Tracking

Moving objects in videos of static scenes, such as CCTV camera feeds, can easily be annotated by manually aligning their bounding boxes over the course of several seconds to minutes. The platform then interpolates the intermediate frames and automatically generates annotations for them.
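
The interpolation step can be pictured as a linear blend of box coordinates between two manually placed keyframes; the sketch below is a simplified stand-in for the platform's tracker, with made-up frame indices and coordinates.

```python
def interpolate_boxes(key_a, key_b):
    """Linearly interpolate bounding boxes between two keyframes.
    Each keyframe is (frame_index, (x1, y1, x2, y2))."""
    frame_a, box_a = key_a
    frame_b, box_b = key_b
    boxes = {}
    for frame in range(frame_a, frame_b + 1):
        t = (frame - frame_a) / (frame_b - frame_a)
        boxes[frame] = tuple(a + t * (b - a) for a, b in zip(box_a, box_b))
    return boxes

# Keyframes placed manually 30 frames apart; intermediate boxes are generated.
generated = interpolate_boxes((0, (100, 80, 180, 160)), (30, (220, 90, 300, 170)))
print(generated[15])  # box halfway between the two keyframes
```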

Please contact us for more information about tailored solutions.