Imint’s Vidhance Algorithms Could Soon Replace Live Video Producers

A few months back, we looked at Imint’s video stabilization technology; now the Swedish video technology wizards are back with another demo of what their final product will be able to do with basic smartphone video. Instead of just stabilizing the video, Imint’s defense-developed auto-zoom algorithms can now also automatically track and zoom in on its most interesting parts.

Just like all of Imint’s technology, auto-zoom is the result of the company’s work on defense technology. When you are using a drone, after all, you want to be able to easily track that truck full of insurgents (and maybe even fire a missile at it, at which point you put in a little prayer that its passengers are insurgents and not just a rowdy family on a weekend trip). The same technology is also used in search-and-rescue operations and basic surveillance missions, though.

As Imint CEO Andreas Lifvendahl tells me, this is something that’s really hard to build. “The problem is harder to solve than it might sound, especially in real time,” he said. “Simple versions exist for static, fixed-mounted surveillance cameras — where it is just a question of detecting changes between frames — but in our case we need to detect the significant motion in three possible levels of motion patterns: moving background (as a result of the camera movement or manual zooms); motion in background objects (waves on the ocean, leaves rustling in the wind, etc.); and the actual motion of the objects you want to detect (and subsequently track).”
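To make those three layers concrete, here is a minimal sketch of the classic way to pull them apart, written in Python with OpenCV. It illustrates the general technique, not Imint’s actual pipeline: fit a global camera-motion model to tracked features, and treat whatever that model cannot explain as candidate object motion.

```python
# A minimal sketch (not Imint's pipeline) of separating the motion layers
# Lifvendahl describes. Points that fit a global homography are "moving
# background"; small residuals correspond to background texture such as
# waves or foliage; large residuals are candidate objects to track.
import cv2
import numpy as np

def find_object_motion(prev_gray, curr_gray, residual_thresh=3.0):
    # Track sparse corner features from the previous to the current frame.
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                  qualityLevel=0.01, minDistance=8)
    if pts is None:
        return np.empty((0, 2))
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    ok = status.ravel() == 1
    p0, p1 = pts[ok].reshape(-1, 2), nxt[ok].reshape(-1, 2)
    if len(p0) < 4:
        return np.empty((0, 2))

    # Layer 1: global camera motion, fit robustly with RANSAC so that
    # moving objects don't bias the estimate.
    H, _ = cv2.findHomography(p0, p1, cv2.RANSAC, 3.0)
    if H is None:
        return np.empty((0, 2))

    # Warp the original points by the camera model; what remains is
    # motion the camera alone cannot explain.
    p0_h = cv2.perspectiveTransform(p0.reshape(-1, 1, 2), H).reshape(-1, 2)
    residual = np.linalg.norm(p1 - p0_h, axis=1)

    # Layers 2 vs. 3: treat large residuals as candidate object motion.
    return p1[residual > residual_thresh]
```

A production system would add temporal smoothing and spatial-coherence checks to reject waves and rustling leaves; the fixed threshold above only stands in for that step.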


The team is still working on improving the technology (to be able to predict object motion and direction, for example) and trying to figure out exactly what the user interface for this tool would look like. But even in its current state, Lifvendahl is confident that the automated result is often better than what you could accomplish yourself, even with a very good camera.

The team is now working on figuring out how to use its technology to automate collaborative filmmaking.

“We have seen efforts in this direction, but we are certain we can take this many steps forward,” Lifvendahl tells me. “The quality and motion metadata can be used for a ‘cloud producer’ functionality, where several smartphone users team up to become contributors to one live stream (or one final merged clip). The contributors would provide video streams, metadata and cues to the cloud director (‘I have something good here, let me go live’), and the automated cloud producer would decide what goes live and what doesn’t, and could also send cues to the contributors (‘you are going live in 3, 2, 1…’) or control functionality on the contributors’ side, like enabling auto-zoom.”
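As a rough illustration of what that hand-off could look like, here is a hypothetical sketch in Python; the message fields, scoring rule and cue strings are assumptions for illustration, not Imint’s actual protocol.

```python
# Hypothetical sketch of a "cloud producer" loop: contributors report
# quality metadata plus an optional "I have something good" cue, and the
# producer decides who goes live and cues the hand-off back.
from dataclasses import dataclass

@dataclass
class ContributorUpdate:
    contributor_id: str
    stability: float      # 0..1, e.g. from stabilization metadata
    subject_score: float  # 0..1, how "interesting" the auto-zoom target is
    wants_live: bool      # explicit cue from the contributor

class CloudProducer:
    def __init__(self):
        self.live_id = None

    def score(self, u: ContributorUpdate) -> float:
        # Weight explicit cues above passive quality metadata (assumed rule).
        return u.stability * u.subject_score + (0.5 if u.wants_live else 0.0)

    def decide(self, updates: list[ContributorUpdate]) -> list[str]:
        """Return cues to send back to contributors, if the feed changes."""
        best = max(updates, key=self.score)
        cues = []
        if best.contributor_id != self.live_id:
            cues.append(f"{best.contributor_id}: you are going live in 3, 2, 1...")
            self.live_id = best.contributor_id
        return cues

producer = CloudProducer()
print(producer.decide([
    ContributorUpdate("alice", stability=0.9, subject_score=0.7, wants_live=False),
    ContributorUpdate("bob", stability=0.6, subject_score=0.8, wants_live=True),
]))
```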

He believes that as long as you allow for a lag of ten to twenty seconds, an algorithm should be able to come pretty close to what a human video producer can accomplish.
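A toy example shows why that lag matters: with a buffer of a few hundred frames, the producer can score each moment with hindsight before committing to a cut. The window length and averaging rule below are assumptions for illustration, not details from Imint.

```python
# Toy delay buffer: decide which camera to show for the oldest buffered
# moment only after seeing what happened in the seconds that followed.
from collections import deque

DELAY_FRAMES = 15 * 30  # ~15 s of lag at 30 fps (assumed)
buffer = deque()

def ingest(frame_scores):
    """frame_scores maps camera_id -> interest score for one frame."""
    buffer.append(frame_scores)
    if len(buffer) < DELAY_FRAMES:
        return None  # still filling the hindsight window
    # Average each camera over the whole window so brief spikes
    # don't cause jittery cuts, then emit a choice and slide forward.
    cams = buffer[0].keys()
    choice = max(cams, key=lambda c: sum(f[c] for f in buffer) / len(buffer))
    buffer.popleft()
    return choice
```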

Imint plans to enable auto-zoom and, later, the live video-editing feature in its SDK, which is currently being tested by a number of companies. Hopefully, the company will soon be able to announce when (and on which devices) its technology will be available to users. Assuming the real-life experience lives up to the company’s promises, this has the potential to quickly change how we think about and record video on our smartphones.