Study shows: Artificial Intelligence exhibits similar confidence biases to humans

2023-11-30

Overconfidence is a hallmark of human decision-making, a distinctly irrational feature that has now been replicated and analyzed using artificial intelligence (AI) models. A team led by RIKEN researchers published these unsettling results in Nature Communications, revealing that our inflated confidence may stem from subtle cues in what we observe.


When we catch a glimpse of a familiar object, we often immediately and confidently conclude that it is that object, even when the balance of evidence does not justify such high confidence. This disconnect between decision-making and confidence has long puzzled researchers, because it suggests that while we are capable of making highly rational decisions, our confidence in those decisions can be quite irrational.


"There has always been a tension between theory and empirical data, with theory straightforwardly assuming that humans are rational, while empirical data clearly shows that this is not always the case," said Hakwan Lau of the RIKEN Center for Brain Science.


This often occurs when the image is unclear. Mathematically, the noisiness of an image can be quantified by its signal-to-noise ratio, a metric that compares the strength of the underlying signal (the clear image) to the strength of the deviations from it (the noise).
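As a hypothetical illustration (the article does not specify the exact formula the researchers used), one common definition of signal-to-noise ratio treats the noise as the pixel-wise deviation from the clear image and reports the ratio of signal power to noise power in decibels:

```python
import numpy as np

def snr_db(clean, noisy):
    """Signal-to-noise ratio in dB: signal power over noise power.

    The 'noise' is taken as the deviation of the noisy image from
    the clean one, matching the description in the text.
    """
    noise = noisy - clean
    signal_power = np.mean(clean ** 2)
    noise_power = np.mean(noise ** 2)
    return 10 * np.log10(signal_power / noise_power)

# Toy example with a stand-in "clear image" plus random noise.
rng = np.random.default_rng(0)
clean = rng.uniform(0, 1, size=(64, 64))
noisy = clean + rng.normal(0, 0.1, size=clean.shape)
print(snr_db(clean, noisy))
```

With a deterministic check: a constant image of ones with a uniform offset of 0.5 gives a power ratio of 1 / 0.25 = 4, i.e. about 6.02 dB.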


But this is where things get strange. "If I make the image more prominent and noisier, but keep the overall signal-to-noise ratio the same, we somehow become more confident, believing that we know what we are looking at, even though we are not seeing it more clearly," said Lau. "It turns out that the seemingly random noise structure is actually important."
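The manipulation Lau describes can be sketched in a few lines: amplifying both the image and its noise by the same factor makes the stimulus look "stronger" while leaving the signal-to-noise ratio exactly unchanged. This is a minimal, assumed formalization, not the study's actual stimulus code:

```python
import numpy as np

def snr(signal, noise):
    # Plain power ratio; any common SNR definition behaves the same way here.
    return np.mean(signal ** 2) / np.mean(noise ** 2)

rng = np.random.default_rng(0)
signal = rng.uniform(0, 1, size=1000)      # stand-in image signal
noise = rng.normal(0, 0.2, size=1000)      # stand-in additive noise

base = snr(signal, noise)
boosted = snr(3 * signal, 3 * noise)       # more prominent AND noisier

print(np.isclose(base, boosted))           # True: SNR is scale-invariant
```

Because the common factor cancels in the power ratio, any change in reported confidence under this manipulation cannot be explained by SNR alone, which is what makes the finding puzzling.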


Now, Lau and his colleagues have used an AI model that specifically reports confidence to study the impact of different types of noise on decision confidence.
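The article does not describe the model's architecture, but the idea of a model that "reports confidence" can be sketched with a hypothetical stand-in: a classifier whose confidence readout is taken to be its top softmax probability.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a 1-D array of logits.
    e = np.exp(z - z.max())
    return e / e.sum()

def decide_with_confidence(logits):
    """Return (choice, confidence) from raw classifier scores.

    Confidence here is simply the probability assigned to the chosen
    class -- one common readout, and only an assumption about the
    study's actual model.
    """
    p = softmax(np.asarray(logits, dtype=float))
    choice = int(np.argmax(p))
    return choice, float(p[choice])

choice, conf = decide_with_confidence([2.0, 0.5, 0.1])
```

A readout like this lets researchers compare how decisions and confidence each respond to different noise structures, which is the comparison described in the text.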


"People always want to know what an AI model is doing," said Lau. "But the beauty of artificial intelligence models is that once they learn, we can 'dissect' the model and understand it better."


Surprisingly, the AI model exhibited the same overconfidence bias as humans. And as Lau points out, that may be exactly what it should do.


"In a sense, the model is rational because it learns from the noise structure of natural images, which is different from the standard noise types assumed in signal processing models," said Lau. "It is the learning of the statistical properties of these natural images that leads to these models having these obvious biases."