Object Permanence allows people to reason about the location of non-visible objects, by understanding that objects continue to exist even when not perceived directly. Object Permanence is critical for building a model of the world, since objects in natural visual scenes dynamically occlude and contain each other. Intensive studies in developmental psychology suggest that Object Permanence is a challenging skill that is learned through extensive experience. Here we introduce the task of learning Object Permanence from labeled videos. We explain why this learning problem should be dissected into four components, where objects are (1) visible, (2) occluded, (3) contained by another object, and (4) carried by a containing object. The fourth subtask, where a target object is carried by a containing object, is particularly challenging because it requires the system to reason about the moving location of an invisible object. We then present a unified deep architecture that learns to predict object location under these four scenarios. We evaluate the architecture on a new dataset based on CATER with per-frame labels, and find that it outperforms previous localization methods and various baselines.
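The four-way decomposition described in the abstract can be made concrete with a small sketch. The following is a minimal, hypothetical illustration (not the paper's actual code or API): given hand-rolled per-frame annotations, it classifies a target object's frame into one of the four Object Permanence scenarios. The `FrameAnnotation` fields and the `permanence_state` function are assumptions introduced here for illustration only.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FrameAnnotation:
    """Hypothetical per-frame label for a single target object."""
    visible: bool                    # target directly visible in this frame
    occluder: Optional[str] = None   # id of an object hiding the target, if any
    container: Optional[str] = None  # id of an object containing the target, if any
    container_moving: bool = False   # whether the container is currently in motion

def permanence_state(f: FrameAnnotation) -> str:
    """Classify a frame into the four Object Permanence scenarios.

    Containment takes precedence over occlusion, and a moving container
    turns 'contained' into the hardest case, 'carried'.
    """
    if f.visible:
        return "visible"
    if f.container is not None:
        return "carried" if f.container_moving else "contained"
    if f.occluder is not None:
        return "occluded"
    return "unknown"

# Example: the target sits inside a cup that is being moved.
frame = FrameAnnotation(visible=False, container="cup_2", container_moving=True)
print(permanence_state(frame))  # -> carried
```

The "carried" case is the one the abstract singles out: the target is invisible, yet its location changes over time, so a localizer must track the container as a proxy for the target.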
Title of host publication: Computer Vision – ECCV 2020 – 16th European Conference, Proceedings
Editors: Andrea Vedaldi, Horst Bischof, Thomas Brox, Jan-Michael Frahm
Publisher: Springer Science and Business Media Deutschland GmbH
Number of pages: 16
State: Published – 2020
Event: 16th European Conference on Computer Vision, ECCV 2020 – Glasgow, United Kingdom
Duration: 23 Aug 2020 → 28 Aug 2020
Series: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Bibliographical note (Funding Information):
Acknowledgments. This study was funded by grants to GC from the Israel Science Foundation and Bar-Ilan University (ISF 737/2018, ISF 2332/18). AS is funded by the Israeli Innovation Authority through the AVATAR consortium. AG received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant ERC HOLI 819080).
© 2020, Springer Nature Switzerland AG.
- Object Permanence
- Video Analysis