Authors:
Masaru Shirasuna, Hidehito Honda, Rina Kagawa
(白砂大、本田秀仁、香川璃奈)

Title:
Dilemma between bias interaction and trustworthy AI in human–AI collaborated judgments

Journal:
Computers in Human Behavior: Artificial Humans

doi:
10.1016/j.chbah.2026.100314

Paper URL:
https://www.sciencedirect.com/science/article/pii/S2949882126000654

Abstract:
In today’s world, humans collaborate with artificial intelligence (AI) and often make judgments with the help of decision-support AI. Humans often show cognitive biases in their judgments, and AI systems sometimes show biased judgments as well. However, it remains largely unknown how human cognition interacts with AI’s judgments. We hypothesized that even a biased AI, particularly one biased in the direction opposite to human biases, could improve judgment accuracy to the same extent as an unbiased AI, because the AI’s bias would cancel out the individual’s bias (e.g., a human’s overestimation bias is corrected by the AI’s underestimation bias). On the other hand, such corrections also introduce a dilemma: humans may distrust AI judgments that deviate from their own, even when accepting such AI would be beneficial. We investigated these issues using a simple perceptual judgment task. First, a theoretical computer simulation showed that the optimal AI assistance depended on individuals’ biases. Second, two behavioral experiments demonstrated that AI biased in the direction opposite to participants’ biases tended to improve participants’ accuracy as much as unbiased AI did. However, participants tended to evaluate such AI as less trustworthy. Our findings challenge the conventional belief that AI with greater accuracy and trustworthiness necessarily provides better decision support, and emphasize the importance of considering human cognition in designing AI systems. Practical implications for achieving better human–AI collaboration are discussed.
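The bias-cancellation idea in the abstract can be illustrated with a toy simulation. This is a minimal sketch under assumed parameters (bias magnitudes, noise levels, and equal-weight averaging are all hypothetical choices, not the paper's actual model): a human who overestimates is paired with an AI that underestimates by a similar amount, and their averaged judgment ends up closer to the truth than the human alone.

```python
import numpy as np

rng = np.random.default_rng(0)

truth = 100.0      # true value to be judged (arbitrary)
n_trials = 10_000

# Hypothetical parameters: human with a positive (overestimation) bias,
# one AI with an equal-magnitude negative (underestimation) bias, and
# one unbiased AI; all with independent judgment noise.
human = truth + 5.0 + rng.normal(0, 10, n_trials)        # overestimates
ai_opposite = truth - 5.0 + rng.normal(0, 10, n_trials)  # underestimates
ai_unbiased = truth + rng.normal(0, 10, n_trials)        # no bias

def rmse(judgments):
    """Root-mean-square error of judgments around the true value."""
    return float(np.sqrt(np.mean((judgments - truth) ** 2)))

# Simple equal-weight aggregation of human and AI judgments.
combined_opposite = (human + ai_opposite) / 2
combined_unbiased = (human + ai_unbiased) / 2

print(f"human alone:          {rmse(human):.2f}")
print(f"human + opposite AI:  {rmse(combined_opposite):.2f}")
print(f"human + unbiased AI:  {rmse(combined_unbiased):.2f}")
```

With these assumed numbers, the opposite-bias AI improves accuracy roughly as much as the unbiased AI, because the systematic biases cancel while the independent noise averages down.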

Contact email:
m.shirasuna1392[at]gmail.com (please replace [at] with @) (Masaru Shirasuna, Shizuoka University)

Comment:
We examined the possibility that deliberately giving a decision-support AI a bias in the direction opposite to a person's own can be effective for improving judgment accuracy (e.g., presenting a person with an overestimation bias with an AI that behaves as if it has an underestimation bias). Computer simulations and behavioral experiments supported this prediction, while also confirming that people tend to rate an AI whose bias differs from their own as untrustworthy. This study suggests the importance of sometimes accepting AI that offers opinions different from one's own, and of considering human cognitive characteristics when designing decision-support AI.