Audio-Visual Waypoints for Navigation. (arXiv:2008.09622v1 [cs.CV])

In audio-visual navigation, an agent intelligently travels through a complex,
unmapped 3D environment using both sights and sounds to find a sound source
(e.g., a phone ringing in another room). Existing models learn to act at a
fixed granularity of agent motion and rely on simple recurrent aggregations of
the audio observations. We introduce a reinforcement learning approach to
audio-visual navigation with two key novel elements: 1) audio-visual waypoints
that are dynamically set and learned end-to-end within the navigation policy,
and 2) an acoustic memory that provides a structured, spatially grounded record
of what the agent has heard as it moves. Both new ideas capitalize on the
synergy of audio and visual data for revealing the geometry of an unmapped
space. We demonstrate our approach on the challenging Replica environments of
real-world 3D scenes. Our model improves the state of the art by a substantial
margin, and our experiments reveal that learning the links between sights,
sounds, and space is essential for audio-visual navigation.
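To make the acoustic memory idea concrete, here is a minimal sketch of one plausible form it could take: an egocentric 2D grid that records the sound intensity heard at each location the agent visits. The class name, grid representation, and update rule are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

class AcousticMemory:
    """Hypothetical spatially grounded record of what the agent has heard.

    A 2D grid over the (discretized) environment; each cell stores the
    strongest sound intensity observed there so far.
    """

    def __init__(self, size=64):
        self.grid = np.zeros((size, size), dtype=np.float32)

    def update(self, x, y, intensity):
        # Keep the strongest intensity observed at each location, so the
        # memory is a persistent record rather than a transient observation.
        self.grid[y, x] = max(self.grid[y, x], intensity)

    def as_feature_map(self):
        # Normalized map that a navigation policy could consume as an
        # extra input channel alongside visual features.
        m = self.grid.max()
        return self.grid / m if m > 0 else self.grid

# Example: a weaker sound at the same cell does not overwrite the record.
mem = AcousticMemory(size=8)
mem.update(2, 3, 0.5)
mem.update(2, 3, 0.2)
feat = mem.as_feature_map()
```

A structured memory like this, unlike a flat recurrent state, lets the policy relate past audio observations to specific places, which is the synergy between sound and space the abstract highlights.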
