3D-OES: Viewpoint-Invariant Object-Factorized Environment Simulators

CoRL 2020

Hsiao-Yu Fish Tung*, Zhou Xian*, Mihir Prabhudesai, Shamit Lal, Katerina Fragkiadaki
Carnegie Mellon University


3D-OES predicts 3D object motion under agent-object and object-object interactions, using a graph neural network over 3D feature maps of detected objects. Node features capture the appearance of an object and its immediate context, and edge features capture the relative 3D locations between pairs of nodes, making the model translation-invariant. After message passing between nodes, the node and edge features are decoded into future 3D rotations and translations for each object.
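Below is a minimal PyTorch sketch of the graph network described in the caption, not the authors' implementation: node features stand in for per-object 3D feature crops, edge features are built only from relative 3D offsets between object centroids (hence the translation invariance), and after one round of message passing each node is decoded into a 3D rotation (Euler angles here) and a translation. All layer sizes, module names, and the Euler-angle parameterization are illustrative assumptions.

import torch
import torch.nn as nn

class ObjectMotionGNN(nn.Module):
    def __init__(self, node_dim=128, hidden_dim=128):
        super().__init__()
        # encodes a (flattened) 3D feature crop of an object and its immediate context
        self.node_encoder = nn.Sequential(nn.Linear(node_dim, hidden_dim), nn.ReLU())
        # edge features depend only on relative 3D locations, giving translation invariance
        self.edge_encoder = nn.Sequential(nn.Linear(3, hidden_dim), nn.ReLU())
        self.message_fn = nn.Sequential(nn.Linear(3 * hidden_dim, hidden_dim), nn.ReLU())
        self.update_fn = nn.GRUCell(hidden_dim, hidden_dim)
        # per-object motion: 3 Euler angles + a 3D translation
        self.motion_head = nn.Linear(hidden_dim, 6)

    def forward(self, node_feats, centroids):
        # node_feats: (N, node_dim) object appearance features
        # centroids:  (N, 3) object centers in the 3D feature space
        h = self.node_encoder(node_feats)                         # (N, H)
        n = h.shape[0]
        rel = centroids[None, :, :] - centroids[:, None, :]       # (N, N, 3) relative offsets
        e = self.edge_encoder(rel)                                # (N, N, H)
        h_i = h[:, None, :].expand(n, n, -1)
        h_j = h[None, :, :].expand(n, n, -1)
        msg = self.message_fn(torch.cat([h_i, h_j, e], dim=-1))   # messages j -> i
        mask = 1.0 - torch.eye(n, device=h.device)[..., None]     # drop self-messages
        h = self.update_fn((msg * mask).sum(dim=1), h)
        return self.motion_head(h)                                # (N, 6): rotation + translation

# toy usage: three detected objects with 128-d appearance features
model = ObjectMotionGNN()
print(model(torch.randn(3, 128), torch.randn(3, 3)).shape)        # torch.Size([3, 6])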



Abstract

Humans can effortlessly predict approximately how a scene changes as a result of their actions. Such predictions, which we call mental simulations, are carried out in an abstract visual space that is not tied to a particular camera viewpoint and does not suffer from occlusions: even if a mug is placed behind a bottle from our viewpoint at a certain moment, the mug persists in our mind. Inspired by this ability to simulate the environment in a viewpoint- and occlusion-invariant way, we propose a model of action-conditioned dynamics that predicts scene changes caused by object and agent interactions in a viewpoint-invariant 3D neural scene representation space, inferred from RGB-D videos. In this 3D feature space, objects do not interfere with one another and their appearance persists across viewpoints and over time. This permits our model to predict future scenes by simply "moving" 3D object features based on cumulative object motion predictions, provided by a graph neural network that operates over the object features extracted from the 3D neural scene representation. Moreover, our 3D representation allows us to alter the observed scene and run counterfactual simulations in multiple ways, such as enlarging objects or moving them around, and then simulating the corresponding outcomes. Our model's mental simulations can be decoded by a neural renderer into 2D image projections from any desired viewpoint, which aids interpretability of the latent 3D feature space. We demonstrate strong generalization of our model across camera viewpoints and across varying numbers and appearances of interacting objects, while also outperforming multiple existing 2D models by a large margin. We further show effective sim-to-real transfer by applying our model, trained solely in simulation, to a pushing task on a real robotic setup.
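As a sketch of the rollout idea in the abstract, under one reading of "cumulative": per-step rigid motions predicted for each object are composed into a cumulative transform, and the object features from the first observed frame are always warped by that cumulative transform rather than re-warping already-warped features. The functions predict_step_motions and warp_object_features below are hypothetical stand-ins for the graph network and the feature-warping operator, not the authors' interfaces.

import torch

def predict_step_motions(obj_feats, cumulative):
    # hypothetical stand-in for the graph network: one (4, 4) rigid transform per
    # object, describing its motion over the next timestep
    return [torch.eye(4) for _ in obj_feats]                # placeholder: no motion

def warp_object_features(feats, transform):
    # hypothetical stand-in for rigidly warping an object's 3D feature map
    # (see the warping sketch further down the page)
    return feats                                            # placeholder: identity warp

def rollout(obj_feats_t0, horizon=5):
    # compose per-step motions into cumulative transforms and always warp the
    # features of the first frame, so the features themselves are never degraded
    cumulative = [torch.eye(4) for _ in obj_feats_t0]
    frames = []
    for _ in range(horizon):
        step = predict_step_motions(obj_feats_t0, cumulative)
        cumulative = [s @ c for s, c in zip(step, cumulative)]
        frames.append([warp_object_features(f, c)
                       for f, c in zip(obj_feats_t0, cumulative)])
    return frames

# toy usage: two objects, each represented by a (C, D, H, W) feature crop
frames = rollout([torch.randn(32, 16, 16, 16) for _ in range(2)])
print(len(frames), frames[0][0].shape)                      # 5 torch.Size([32, 16, 16, 16])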




Experimental Results




Collision-free pushing in a real-world setup. The robot pushes a mouse to a target location without colliding with any obstacles, using 3 push attempts planned with our model.
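A minimal sketch, not the authors' planner, of how such a learned simulator can be used to choose collision-free pushes: sample candidate push actions, roll each out with the model, and keep the push that brings the target object closest to the goal while predicting no contact with obstacles. The simulate_push interface, the workspace bounds, and the cost weights are illustrative assumptions.

import torch

def plan_push(simulate_push, goal_xy, n_samples=100, collision_penalty=1e3):
    # simulate_push: hypothetical callable mapping a push action to the predicted
    # final (x, y) of the target object and a predicted-collision flag
    best_action, best_cost = None, float("inf")
    for _ in range(n_samples):
        action = {
            "start_xy": torch.rand(2) * 0.4 - 0.2,          # sample within a 40 cm workspace
            "angle": torch.rand(1) * 2 * torch.pi,
            "length": torch.rand(1) * 0.1,                  # pushes up to 10 cm long
        }
        pred_xy, collides = simulate_push(action)
        cost = torch.norm(pred_xy - goal_xy) + collision_penalty * float(collides)
        if cost < best_cost:
            best_action, best_cost = action, cost
    return best_action

# toy usage with a dummy simulator that moves the object by the push vector
def dummy_sim(a):
    delta = a["length"] * torch.cat([torch.cos(a["angle"]), torch.sin(a["angle"])])
    return a["start_xy"] + delta, False

print(plan_push(dummy_sim, goal_xy=torch.tensor([0.1, 0.0])))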



Neurally rendered simulation videos from three different views. Left: ground-truth simulation videos from the dataset, generated with the Bullet physics simulator. Right: neurally rendered simulation videos from the proposed model. Our model forecasts future latent features by explicitly warping the latent 3D feature maps, and these warped 3D feature maps are passed through the learned 3D-to-2D image decoder to produce human-interpretable images. We can render images from any arbitrary view, and the renderings are consistent across views.
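The explicit warping mentioned above can be sketched with standard PyTorch resampling. The snippet below, a sketch rather than the authors' code, rigidly warps a 3D feature grid with trilinear sampling; note that the translation is expressed in affine_grid's normalized [-1, 1] coordinates. The warped volume would then be passed through the learned 3D-to-2D decoder, which is not sketched here.

import torch
import torch.nn.functional as F

def warp_feature_grid(feats, rotation, translation):
    # feats: (1, C, D, H, W) 3D feature map; rotation: (3, 3); translation: (3,),
    # in normalized grid coordinates. grid_sample reads the input at grid locations,
    # so the *inverse* transform is applied to the sampling grid to move content forward.
    R_inv = rotation.transpose(0, 1)                        # inverse of a rotation matrix
    t_inv = -R_inv @ translation
    theta = torch.cat([R_inv, t_inv[:, None]], dim=1)[None] # (1, 3, 4) affine parameters
    grid = F.affine_grid(theta, feats.shape, align_corners=False)
    return F.grid_sample(feats, grid, mode="bilinear", align_corners=False)  # trilinear for 5D input

# toy usage: rotate a random feature volume 90 degrees about one axis
feats = torch.randn(1, 32, 16, 16, 16)
R = torch.tensor([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
print(warp_feature_grid(feats, R, torch.zeros(3)).shape)    # torch.Size([1, 32, 16, 16, 16])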



Neurally rendered simulation videos of counterfactual experiments. The first row shows the ground-truth simulation video from the dataset; only the first frame of this video is given to our model to produce the predicted simulations. The second row shows the ground-truth simulation from a query view. Note that our model can render images from any arbitrary view; we choose this particular view for better visualization. The third row shows our model's future prediction given the input image. The following rows show simulations after manipulating an object (in the blue box) according to the instruction in the leftmost column.
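As a sketch of one such manipulation, assuming a voxel-aligned object box, the edit below cuts an object's feature crop out of the 3D scene feature map and pastes it at a new location before the simulation is run; enlarging an object would instead resize the crop (e.g. with trilinear interpolation) before pasting it back. The function and its box convention are illustrative, not the authors' interface.

import torch

def move_object(scene, box, new_corner):
    # scene: (C, D, H, W) 3D scene feature map
    # box: (z, y, x, d, h, w) voxel-aligned crop around the object
    # new_corner: (z, y, x) location where the crop's corner should be pasted
    z, y, x, d, h, w = box
    crop = scene[:, z:z+d, y:y+h, x:x+w].clone()
    edited = scene.clone()
    edited[:, z:z+d, y:y+h, x:x+w] = 0.0                    # empty the old location
    nz, ny, nx = new_corner
    edited[:, nz:nz+d, ny:ny+h, nx:nx+w] = crop             # paste at the new location
    return edited

# toy usage: move a 4x4x4 object crop within a 16^3 scene grid
scene = torch.randn(32, 16, 16, 16)
print(move_object(scene, box=(2, 2, 2, 4, 4, 4), new_corner=(10, 10, 10)).shape)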

Video


CoRL 2020 Spotlight Talk




Paper



Code (coming soon)