Neural Re-Simulation for Generating Bounces in Single Images

ICCV 2019

Publication date: October 28, 2019

Carlo Innamorati, Bryan Russell, Danny Kaufman, Niloy J. Mitra

We take as input a single still image depicting a scene and output a video of a virtual object dynamically interacting with the scene through bouncing. Here, we consider a ball as our virtual object. We achieve this with our Dynamic Object Generation Network, which takes as inputs an estimated depth map and an initial forward trajectory of the virtual object produced by the PyBullet physics simulator, and outputs a 'corrected' trajectory via a neural re-simulation step. To visualize the trajectories in this paper, we composite the virtual object at each time step onto the input image; warmer colors indicate earlier time steps.
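To make the pipeline concrete, the sketch below mimics the "initial forward trajectory" stage with a simple analytic bouncing-ball rollout instead of PyBullet; this is a hypothetical illustration, not the authors' code, and the function name, parameters, and restitution model are assumptions. In the paper, a rollout like this (produced by PyBullet against estimated scene geometry) is the input that the network then corrects.

```python
# Hypothetical sketch of an initial forward trajectory for a bouncing
# ball, standing in for the PyBullet rollout that the paper's network
# later corrects. Semi-implicit Euler integration with a restitution
# coefficient applied at ground contact.

def simulate_bounce(p0=(0.0, 1.0), v0=(1.0, 0.0), g=-9.81,
                    restitution=0.6, dt=1.0 / 60.0, steps=120):
    """Return the ball's (x, y) position at each time step."""
    x, y = p0
    vx, vy = v0
    trajectory = []
    for _ in range(steps):
        vy += g * dt            # gravity updates vertical velocity first
        x += vx * dt            # then positions advance with new velocity
        y += vy * dt
        if y < 0.0:             # ground contact: clamp, reflect, and damp
            y = 0.0
            vy = -vy * restitution
        trajectory.append((x, y))
    return trajectory

trajectory = simulate_bounce()  # one (x, y) sample per time step
```

Each bounce peak is lower than the last because restitution removes kinetic energy at every contact; the network's job is to replace such an idealized rollout with a trajectory consistent with the actual scene.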