Cognitive Mechanisms for Calibrating Trust and Reliance on Automation
Trust calibration in a human-autonomy team is the process by which a human adjusts their understanding of the automation's capabilities; it is needed to engender appropriate reliance on the automation. Here, we develop an Instance-Based Learning Theory (IBLT) ACT-R model of decisions to obtain and rely on an automated assistant for visual search in a UAV interface. We demonstrate that the model closely matches human predictive-power statistics measuring reliance calibration, and we obtain from the model an internal estimate of automation reliability that mirrors human subjective ratings. Our model is a promising step toward a computational process model of trust and reliance in human-machine teaming.
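The abstract does not give implementation details, but the core instance-based learning mechanism it refers to can be sketched in a few lines. The sketch below uses standard IBLT components (base-level activation with power-law decay and noise, Boltzmann retrieval probabilities, blended values); the class name, options, and parameter values are illustrative assumptions, not the authors' actual model.

```python
import math
import random

random.seed(1)

DECAY = 0.5                   # memory decay d (common ACT-R default)
NOISE = 0.25                  # activation noise s (assumed value)
TEMP = NOISE * math.sqrt(2)   # Boltzmann temperature tau

class IBLAgent:
    """Minimal IBL agent choosing whether to rely on automation or search manually."""

    def __init__(self, options):
        # instances[option] = list of (timestamp, observed outcome)
        # Prepopulated optimistic instance drives early exploration.
        self.instances = {o: [(0, 1.0)] for o in options}
        self.t = 1

    def _blended_value(self, option):
        # Base-level activation: ln of power-law-decayed recency, plus noise.
        acts = []
        for ts, outcome in self.instances[option]:
            base = math.log(max(self.t - ts, 1) ** -DECAY)
            acts.append((base + random.gauss(0, NOISE), outcome))
        # Blended value: outcomes weighted by Boltzmann retrieval probabilities.
        denom = sum(math.exp(a / TEMP) for a, _ in acts)
        return sum(math.exp(a / TEMP) / denom * x for a, x in acts)

    def choose(self):
        # Pick the option with the highest blended value this trial.
        return max(self.instances, key=self._blended_value)

    def feedback(self, option, outcome):
        # Store the experienced outcome as a new instance and advance time.
        self.instances[option].append((self.t, outcome))
        self.t += 1
```

Run over repeated trials with feedback about the automation's hits and misses, an agent like this shifts its reliance toward whichever option has yielded better recent outcomes, which is the qualitative pattern of reliance calibration described in the abstract.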
Thank you for a nice talk on this exciting topic. I'm curious about it because factors such as prediction and emotion seem to be involved. In this context, I'd like to know your thoughts on the advantage of IBLT for representing this phenomenon. I think it would also be possible to use other learning methods, such as reinforcement learning or utility le...
Hi Leslie! Nice talk! I was wondering whether it would be possible to examine whether a group of individuals who rely more on automation also show model performance metrics shifted in the same direction; in other words, can your model explain individual differences? If not, what would be needed?
I couldn't find a paper to go with your presentation.
This work might be of interest to you on this topic: Gao, J., & Lee, J. D. (2006). Extending the decision field theory to model operators' reliance on automation in supervisory control situations. IEEE Transactions on Systems, Man, and Cybernetics-Part A: Systems and Humans, 36(5), 943-959.
I couldn't find the video for this talk; why is it missing?