Before the vacation was a mad dash to finish a number of conference submissions, including two for HFES 2017. I'm hoping these will get in so that I can see my human factors colleagues in Austin in October. Details below.
What does a robot look like?: A multi-site examination of user expectations about robot form
Draft Abstract:
Robot design is a critical component of human-robot interaction. The appearance of a robot shapes people’s expectations of that robot, which in turn affect human-robot interaction overall. This paper reports on an exploratory analysis of 155 drawings of robots that were collected across three studies at different sites in two different countries. The purpose was to gain a better understanding of people’s a priori expectations about the appearance of robots across a variety of robot types (e.g., household, military, humanoid, general, and AI). Overall, findings suggest that people’s visualizations of robots have common features that can be grouped into six broad factors. Further, people seem to group robots into two general categories, one representing human-like robots intended for household and similar applications, and the other representing more machine-like robots intended for military and similar applications.
Robot Self-Assessment and Expression: A Framework
Draft Abstract:
Future autonomous robots that operate in teams with humans should have capabilities that facilitate implicit communication in the form of emotional expressions. These expressions should be presented clearly to the human in order to promote adequate understanding of robot behaviors and intent. In this paper, we present a Robot Self-Assessment and Expression framework derived from operant conditioning, the reinforcement theory of motivation, and the current state of the art in machine learning. Operant conditioning was proposed in the field of psychology to address the way we, and other intelligent creatures, learn within our environment. This theory states that we learn by receiving rewards and punishments for our actions, and that our motivation for learning stems from our need to obtain as much reward as possible. In the newer field of machine learning, Markov decision processes have been used to model the action and reward features of reinforcement learning to build internal state machines for artificial intelligence. Falling under the purview of artificial intelligence are complex, autonomous robots, which are being incorporated into military teams. When it comes to teamwork, understanding our fellow teammates is a high priority. This is particularly true when instructing a robot teammate on how to perform a novel task. In this paper, we propose a cybernetic framework to address which emotional expressions a robot should display depending on both the predicted and actual outcomes of a task. The end goal is to elicit optimal anticipatory guidance and performance feedback from a human instructor. Future research and areas for testing and validation of the framework are discussed.
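Since the abstract hinges on reinforcement learning over Markov decision processes, here's a minimal sketch of the core idea: the temporal-difference (TD) error between a robot's predicted outcome and its actual outcome could be mapped to an emotional expression. This is not code from the paper; the toy task, reward values, and the choose_expression mapping are all illustrative assumptions.

import random

ACTIONS = ["advance", "retreat"]

def choose_expression(td_error, threshold=0.5):
    # Map prediction error to an expression: better than expected -> pleased,
    # worse than expected -> frustrated, roughly as expected -> neutral.
    if td_error > threshold:
        return "pleasantly surprised"
    if td_error < -threshold:
        return "frustrated"
    return "neutral"

def q_learning_step(q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    # One tabular Q-learning update; the TD error is the gap between the
    # predicted value of the action and the reward-adjusted actual outcome.
    predicted = q[(state, action)]
    actual = reward + gamma * max(q[(next_state, a)] for a in ACTIONS)
    td_error = actual - predicted
    q[(state, action)] += alpha * td_error
    return td_error

# Toy two-state task with noisy rewards, so predictions are sometimes wrong.
q = {(s, a): 0.0 for s in ("start", "goal") for a in ACTIONS}
state = "start"
for step in range(10):
    action = random.choice(ACTIONS)
    next_state = "goal" if action == "advance" else "start"
    reward = random.gauss(1.0 if next_state == "goal" else 0.0, 0.5)
    td = q_learning_step(q, state, action, reward, next_state)
    print(f"step {step}: td_error={td:+.2f} -> {choose_expression(td)}")
    state = next_state

In the framework proper, the expression would presumably feed back into the human instructor's anticipatory guidance and performance feedback; this toy loop only prints it.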