
Classifying Video Training Data For Machine Learning Using WebVMT

A machine learning algorithm needs to be trained to recognise cats and dogs in video footage. The learning process can be accelerated if the training footage is manually tagged to classify the timed sections of video in which cats and dogs appear. This tagging can be recorded in a common metadata format with the proposed data sync feature in WebVMT, as shown in the following excerpt:

NOTE Cat, top left, after 5 secs for 20 secs

00:00:05.000 --> 00:00:25.000
{"sync": {"type": "org.ogc.geoai.catdog", "data":
{"animal":"cat", "frame-zone":"top-left"}}}

NOTE Dog, mid right, after 10 secs for 30 secs

00:00:10.000 --> 00:00:40.000
{"sync": {"type": "org.ogc.geoai.catdog", "data":
{"animal":"dog", "frame-zone":"middle-right"}}}

This approach is applicable to any project that uses video as input to a machine learning algorithm, regardless of the video encoding format (e.g. MPEG, WebM or Ogg), and without modifying the video files themselves.
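
As a rough illustration of how a training pipeline might consume such tags, the TypeScript sketch below scans a WebVMT file for cues whose sync type is org.ogc.geoai.catdog and collects the labelled time intervals. The file name catdog.vmt, the TaggedInterval shape and the parsing helpers are assumptions for this sketch, not part of WebVMT itself.

// Sketch only: extracts tagged intervals from a WebVMT file so they can
// drive frame sampling in a training pipeline. Runs under Node.js.
import { readFileSync } from 'node:fs';

interface TaggedInterval {
  start: number;      // seconds
  end: number;        // seconds
  animal: string;
  frameZone: string;
}

// Convert an "HH:MM:SS.mmm" cue timestamp to seconds.
function toSeconds(timestamp: string): number {
  const [h, m, s] = timestamp.split(':');
  return Number(h) * 3600 + Number(m) * 60 + Number(s);
}

function extractIntervals(vmtText: string): TaggedInterval[] {
  const intervals: TaggedInterval[] = [];
  const lines = vmtText.split(/\r?\n/);
  for (let i = 0; i < lines.length; i++) {
    // A cue starts with a "start --> end" timing line.
    const timing = lines[i].match(/^(\S+)\s+-->\s+(\S+)/);
    if (!timing) continue;
    // The cue payload may span several lines; read until the next blank line.
    let payload = '';
    for (let j = i + 1; j < lines.length && lines[j].trim() !== ''; j++) {
      payload += lines[j];
    }
    try {
      const cue = JSON.parse(payload);
      if (cue?.sync?.type === 'org.ogc.geoai.catdog') {
        intervals.push({
          start: toSeconds(timing[1]),
          end: toSeconds(timing[2]),
          animal: cue.sync.data.animal,
          frameZone: cue.sync.data['frame-zone'],
        });
      }
    } catch {
      // Payload was not a JSON cue body; skip it.
    }
  }
  return intervals;
}

const intervals = extractIntervals(readFileSync('catdog.vmt', 'utf8'));
console.log(intervals);
// e.g. [{ start: 5, end: 25, animal: 'cat', frameZone: 'top-left' }, ...]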

In addition, video metadata can be exposed in a web browser using the proposed DataCue API in HTML.
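
As a rough sketch of what that could look like, the snippet below assumes the shape outlined in the WICG DataCue explainer, where timed metadata cues are delivered on a 'metadata' text track and each cue carries type and value attributes. The track wiring and the cast are assumptions, since DataCue is not yet a standard browser API.

// Sketch only: log WebVMT sync cues as they become active during playback.
const video = document.querySelector('video') as HTMLVideoElement;

video.addEventListener('loadedmetadata', () => {
  for (const track of Array.from(video.textTracks)) {
    if (track.kind !== 'metadata') continue;
    track.mode = 'hidden';                 // receive cue events without rendering
    track.addEventListener('cuechange', () => {
      for (const cue of Array.from(track.activeCues ?? [])) {
        const dataCue = cue as any;        // DataCue is not yet in lib.dom.d.ts
        if (dataCue.type === 'org.ogc.geoai.catdog') {
          // e.g. { animal: 'cat', 'frame-zone': 'top-left' }
          console.log(cue.startTime, cue.endTime, dataCue.value);
        }
      }
    });
  }
});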
