Bridging the Gap between
Real World and Simulation
Perception and Action
You shouldn't play video games all day, and neither should your AI! Gibson is a virtual environment based on the real world, as opposed to games or artificial environments, built to support learning perception. Gibson enables developing algorithms that explore perception and action hand in hand.
Gibson Environment is named after James J. Gibson, author of The Ecological Approach to Visual Perception (1979). Read a relevant excerpt of J. J. Gibson's book here. "We must perceive in order to move, but we must also move in order to perceive." – J. J. Gibson
Use our open-source platform to explore active, real-world perception. Above: two agents in Gibson. The agents are active and embodied, subject to the constraints of physics and space (a, b). They receive a constant stream of visual observations as if they had an on-board camera (c). They can also receive additional modalities, e.g. depth, semantic labels, or surface normals (d, e, f). The visual observations come from real scanned buildings.
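To make the perception-action loop above concrete, here is a minimal sketch of an embodied agent receiving multimodal observations each step. This is an illustrative mock, not Gibson's real API: the class, method names, and modality keys are assumptions chosen to mirror the description (camera stream plus depth, semantic labels, and normals).

```python
# Hypothetical sketch of a Gibson-style perception-action loop.
# Class and method names are illustrative; they are NOT Gibson's real API.
class MockEmbodiedEnv:
    """Yields a dict of per-frame visual modalities, as an embodied agent would."""

    def __init__(self, height=128, width=128):
        self.height, self.width = height, width

    def reset(self):
        return self._observe()

    def step(self, action):
        # A real environment would apply the action under physics constraints;
        # this mock only re-renders the observation.
        return self._observe(), 0.0, False, {}

    def _observe(self):
        # Each modality is represented here only by its image shape.
        h, w = self.height, self.width
        return {
            "rgb": (h, w, 3),       # on-board camera stream
            "depth": (h, w, 1),     # per-pixel depth
            "semantics": (h, w),    # per-pixel semantic labels
            "normals": (h, w, 3),   # per-pixel surface normals
        }


env = MockEmbodiedEnv()
obs = env.reset()
for _ in range(3):                  # a short rollout
    obs, reward, done, info = env.step(action="forward")

print(sorted(obs))  # ['depth', 'normals', 'rgb', 'semantics']
```

The point of the dict-of-modalities layout is that a learning algorithm can subscribe to whichever channels it needs (RGB only, RGB plus depth, etc.) without changing the loop.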
A state-of-the-art 3D database: over 1400 floor spaces from 572 full buildings, scanned using RGB-D cameras.
Read more about our neural network based view synthesis pipeline, physics integration, and experimental results.
We propose a synthesis mechanism called "Goggles" for closing the remaining perceptual gap between the virtual environment and the real world.
Check out our database page for the state-of-the-art 3D model dataset.