Progressive Explanation Generation for Human-robot Teaming. (arXiv:1902.00604v1 [cs.AI])

Generating explanations of its own behavior is an essential capability for
a robotic teammate. Explanations help human partners better understand the
situation and maintain trust in their teammates. Prior work on robot
explanation generation focuses on conveying the reasoning behind the robot's
decision making. These approaches, however, fail to account for the cognitive
effort required to understand an explanation. In other words, while they
provide the right explanations from the explainer's perspective, the
explainee's side of the equation is ignored. In this work, we address an
important aspect along this direction that contributes to a better
understanding of a given explanation, which we refer to as the progressiveness
of explanations. A progressive explanation improves understanding by limiting
the cognitive effort required at each step of making the explanation. As a
result, such explanations are expected to be smoother and hence easier to
understand. A general formulation of progressive explanation is presented,
along with algorithms based on several alternative quantifications of the
cognitive effort incurred as an explanation is made. These algorithms are
evaluated in a standard planning competition domain.
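The core idea, ordering the pieces of an explanation so that no single step demands too much cognitive effort, can be illustrated with a small greedy sketch. This is not the paper's formulation; the effort measure here (number of new facts a step introduces beyond what has already been explained) is a hypothetical stand-in for the quantifications the paper studies:

```python
def step_effort(unit, known):
    """Hypothetical cognitive-effort proxy: how many facts in this
    explanation unit are new relative to what is already explained."""
    return len(unit - known)


def progressive_order(units):
    """Greedily order explanation units, always presenting next the unit
    that requires the least incremental effort given what is known."""
    known = set()
    order = []
    remaining = list(units)
    while remaining:
        nxt = min(remaining, key=lambda u: step_effort(u, known))
        remaining.remove(nxt)
        order.append(nxt)
        known |= nxt  # facts in this unit are now part of shared knowledge
    return order


# Toy example: three explanation units over abstract facts "a", "b", "c".
units = [frozenset({"a", "b", "c"}), frozenset({"a"}), frozenset({"a", "b"})]
order = progressive_order(units)
# Presented progressively, each step introduces exactly one new fact,
# whereas leading with the largest unit would cost three at once.
```

Under this toy measure, the greedy order smooths the explanation: the maximum per-step effort drops from three (presenting the largest unit first) to one.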
