Ever look at a photo and wonder how fast the water was rushing, or if the trees were swaying?
Well now, this “deep learning” technology wants to give you the answer. It’s bringing still images to life… turning them into pictures that move.
Wait… doesn’t this already exist? Aren’t those called videos?!
No, we’re not talking about videos. This is much more interesting (and new)!
This deep learning method is able to animate static images, making things like waterfalls or clouds within a photo move in a natural way.
For example, take a picture of Niagara Falls: the animated result shows the water crashing down while everything else in the image stays perfectly still.
This technique works by creating a short video loop that gives the illusion of endless movement.
Lead author Aleksander Hołyński, a doctoral student in the Paul G. Allen School of Computer Science & Engineering at the University of Washington, said:
“What’s special about our method is that it doesn’t require any user input or extra information. All you need is a picture. And it produces as output a high-resolution, seamlessly looping video that quite often looks like a real video.”
Snap a pic, play a video
Hołyński says that creating movement in a photo this way is like trying to predict the future.
The system first predicts how things were moving when the picture was taken, then uses that information to animate the still scene.
To give it this predictive power, the researchers "trained" the system on thousands of videos of rivers, oceans, and other scenes with fluid motion. Over time, the network learned to pick up visual clues in a photo and predict what would happen next, using that information to figure out how each pixel should move.
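To make "how each pixel should move" concrete, here is a toy sketch of the idea: the network outputs a motion field (one displacement vector per pixel), and animating the photo means repeatedly stepping pixel positions along that field. The function below is a hypothetical illustration with simple nearest-neighbor sampling, not the researchers' actual code.

```python
import numpy as np

def integrate_positions(flow, t):
    """Advance every pixel along a static motion field for t frames.

    flow: (H, W, 2) array giving a per-frame displacement (dx, dy)
    at each pixel -- a stand-in for the network's predicted motion.
    Returns an (H, W, 2) array of the position each pixel reaches
    after t frames. Toy Euler integration for illustration only.
    """
    H, W, _ = flow.shape
    ys, xs = np.mgrid[0:H, 0:W].astype(float)
    pos = np.stack([xs, ys], axis=-1)  # each pixel starts at its own spot
    for _ in range(t):
        # Look up the flow at the nearest grid cell: the motion belongs
        # to the *location*, so water keeps flowing the same way there
        # no matter which pixel has drifted in.
        xi = np.clip(np.round(pos[..., 0]).astype(int), 0, W - 1)
        yi = np.clip(np.round(pos[..., 1]).astype(int), 0, H - 1)
        pos += flow[yi, xi]
    return pos
```

With a uniform rightward flow of one pixel per frame, every pixel simply drifts right by one pixel each frame; real predicted fields curve around rocks, falls, and eddies.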
Next, the researchers created "symmetric splatting," a method that predicts both the future and the past, then combines them into one continuous movement.
Hołyński says, “We integrate information from both of these animations so there are never any glaringly large holes in our warped images.”
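The intuition behind that quote can be sketched in a few lines: warping the photo forward in time leaves holes on one side of moving regions, warping it backward leaves holes on the other side, and blending the two (weighted by how far into the loop you are) covers both. This is a hypothetical simplification of the paper's splatting step, using NaN to mark holes; the weighting scheme shown is an assumption for illustration.

```python
import numpy as np

def symmetric_blend(forward_frame, backward_frame, t, T):
    """Blend a forward-warped and a backward-warped frame at time t of T.

    forward_frame: frame t produced by pushing the photo forward in time.
    backward_frame: the same moment reached by warping backward from the
    loop's end. Holes (NaN pixels) fall on opposite sides of moving
    regions, so each warp can fill the other's gaps. Shifting the weight
    from forward to backward over the loop also lets the last frame wrap
    around to the first without a visible seam.
    """
    alpha = t / T  # 0 at the start of the loop, 1 at the end
    out = (1 - alpha) * forward_frame + alpha * backward_frame
    # Wherever one warp left a hole, rely entirely on the other warp.
    hole_f = np.isnan(forward_frame)
    hole_b = np.isnan(backward_frame)
    out[hole_f] = backward_frame[hole_f]
    out[hole_b] = forward_frame[hole_b]
    return out
```

Because every output pixel gets a value from at least one of the two warps, there are, in Hołyński's words, "never any glaringly large holes" in the result.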
Predicting the future of this technique
So far, this method works best for things that have a predictable fluid motion. As of right now, it can’t predict reflections or how water is distorted when there’s something beneath it.
Hołyński says, “We’d love to extend our work to operate on a wider range of objects, like animating a person’s hair blowing in the wind.”
The technique will be presented at the upcoming Conference on Computer Vision and Pattern Recognition (CVPR).
What do you think about this type of prediction-based technology? Could it be applied to other useful things?
Let us know your thoughts down below!