Learning in navigation: Goal finding in graphs

Peter Cucka, Nathan S. Netanyahu, Azriel Rosenfeld

Research output: Contribution to journal › Article › peer-review


Abstract

A robotic agent operating in an unknown and complex environment may employ a search strategy of some kind to perform a navigational task such as reaching a given goal. In the process of performing the task, the agent can attempt to discover characteristics of its environment that enable it to choose a more efficient search strategy for that environment. If the agent is able to do this, we can say that it has "learned to navigate" - i.e., to improve its navigational performance. This paper describes how an agent can learn to improve its goal-finding performance in a class of discrete spaces, represented by graphs embedded in the plane. We compare several basic search strategies on two different classes of "random" graphs and show how information collected during the traversal of a graph can be used to classify the graph, thus allowing the agent to choose the search strategy best suited for that graph.
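To make the setting concrete, here is a minimal sketch (not taken from the paper) of two "basic search strategies" of the kind the abstract compares: an online depth-first and an online breadth-first goal search in which neighbors are discovered only when the agent arrives at a node. The adjacency-dict representation, function names, and the use of visit count as a cost proxy are all illustrative assumptions; the paper's actual strategies and cost measures may differ.

```python
from collections import deque

def online_dfs(adj, start, goal):
    """Depth-first goal search; neighbors are 'discovered' on arrival.
    Returns the number of nodes visited before reaching the goal
    (a simple proxy for navigational cost), or None if unreachable."""
    stack = [start]
    visited = set()
    steps = 0
    while stack:
        node = stack.pop()
        if node in visited:
            continue
        visited.add(node)
        steps += 1
        if node == goal:
            return steps
        # Only now does the agent learn which edges leave `node`.
        stack.extend(n for n in adj[node] if n not in visited)
    return None

def online_bfs(adj, start, goal):
    """Breadth-first analogue of online_dfs with the same cost measure."""
    queue = deque([start])
    visited = {start}
    steps = 0
    while queue:
        node = queue.popleft()
        steps += 1
        if node == goal:
            return steps
        for n in adj[node]:
            if n not in visited:
                visited.add(n)
                queue.append(n)
    return None

# A toy embedded graph on which the two strategies incur different costs:
adj = {'a': ['b', 'c'], 'b': ['x'], 'c': ['g'], 'x': [], 'g': []}
```

On this graph, `online_dfs(adj, 'a', 'g')` reaches the goal in 3 visits while `online_bfs(adj, 'a', 'g')` needs 5, illustrating why an agent that can classify its environment might prefer one strategy over another.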

Original language: English
Pages (from-to): 429-446
Number of pages: 18
Journal: International Journal of Pattern Recognition and Artificial Intelligence
Volume: 10
Issue number: 5
State: Published - Aug 1996
Externally published: Yes

Keywords

  • Goal finding
  • Learning
  • Navigation
  • Random graphs

