AI is reshaping industries, economies, and societies at an unprecedented pace. From powering everyday digital assistants to revolutionizing research and decision making, AI’s reach is expanding. However, as technology evolves, our understanding of what it means for AI to be truly intelligent also evolves. To build robust, adaptable, and trustworthy AI, we must look beyond technical achievements and draw from the insights of behavioural science.
Why Now?
Tasks that require more than just speed and size are now being assigned to AI. These tasks demand reasoning, flexibility, and judgment: qualities traditionally associated with human cognition. As we uncover glitches, biases, and inefficiencies in even the most advanced AI, it's clear that we need to learn from how humans think and feel.
From Quick Responses to Wise Intelligence
Top-notch AI, particularly large language models (LLMs), excels at delivering instant responses. But when it comes to the slower, reflective kind of thinking, these models fall short. That's where the familiar failure modes show up: making things up, choking on the unfamiliar, and burning through compute like there's no tomorrow.
For organizations deploying AI, these challenges have real implications:
- Reliability: Inconsistent results can erode trust and impede progress.
- Efficiency: Overthinking simple tasks or giving up too early on hard ones wastes resources and misses opportunities.
- Risk Management: Without human oversight, AI can produce suboptimal results, amplify biases, or damage an organization's reputation.
To be a trusted partner, AI needs to level up and, at times, safely automate decisions in high-stakes areas, striking the balance human cognition achieves between fast and slow thinking.
The Behavioural Science Advantage
Through the application of behavioural science, we can create AI that’s not only fast but also wise.
Human-Like Reasoning Requires Metacognition
Relying on fast, automatic, and intuitive processing, most AI models today mirror the human brain's "System 1." However, robust decision-making also requires "System 2": reflective, deliberate, and analytical reasoning. The true advantage for AI lies in metacognition, the ability to think about its own thinking and choose the right mode for the task.
Consider the classic surgeon riddle, for example. LLMs can spot the punchline when the familiar setup is there, but metacognitive controls could help AI know when a shortcut is safe and when to dig deeper.
Building a Metacognitive Controller
Envision a metacognitive controller as a savvy companion that always selects the right tool for the task. Drawing on behavioural science, we can craft AI that sizes up a problem, spots what it doesn't know, and opts for the best strategy:
- Quick Fact Check: The controller sends simple queries to speedy, heuristic processors.
- Complex Tasks: It uses structured reasoning and formal checks for more challenging queries.
- Uncertainty: If it’s not sure, it’ll ask for more details or check with external sources.
This clever routing not only boosts accuracy but also avoids knee-jerk errors and needless waiting.
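A minimal sketch of such routing, with purely illustrative function names and confidence thresholds (nothing here comes from a real system):

```python
# Hypothetical metacognitive controller: route queries by estimated
# difficulty, and escalate when the fast path reports low confidence.

def classify_query(query: str) -> str:
    """Toy difficulty estimate: short questions count as 'simple'."""
    if len(query.split()) <= 6 and query.endswith("?"):
        return "simple"
    return "complex"

def fast_heuristic(query: str) -> tuple[str, float]:
    """Stand-in for a fast, System 1-style responder: (answer, confidence)."""
    return f"quick answer to: {query}", 0.8

def deliberate_reasoning(query: str) -> tuple[str, float]:
    """Stand-in for a slower, System 2-style reasoner."""
    return f"step-by-step answer to: {query}", 0.9

def metacognitive_controller(query: str, confidence_floor: float = 0.7) -> str:
    # Try the fast path on simple queries first.
    if classify_query(query) == "simple":
        answer, confidence = fast_heuristic(query)
        if confidence >= confidence_floor:
            return answer
    # Escalate to deliberate reasoning for hard or low-confidence cases.
    answer, confidence = deliberate_reasoning(query)
    if confidence < confidence_floor:
        # Still unsure: defer rather than guess.
        return "I need more information to answer reliably."
    return answer
```

The key design choice is the escalation path: the fast responder is tried first, a low self-reported confidence triggers the slower reasoner, and persistent uncertainty leads to deferral rather than a guess.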
Resource Rationality: Smarter, Not Just Harder
Efficiency is key, especially when computing resources are limited. AI should focus on smart work, not just hard work.
A recent study, for example, showed that LLMs can sometimes "overthink" simple classification tasks, resulting in less human-like decisions and extended processing times. On the flip side, they may not invest enough effort in more demanding tasks. By embedding resource rationality, an explicit trade-off between expected accuracy and computational cost, AI can become more efficient and trustworthy.
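One way to picture resource rationality is as strategy selection that maximizes expected accuracy minus a compute penalty. The strategies, accuracies, and costs below are invented for illustration:

```python
# Illustrative resource-rational strategy selection: pick the reasoning
# strategy with the best accuracy-minus-cost trade-off. All figures
# are hypothetical.

STRATEGIES = {
    # name: (expected_accuracy, compute_cost in arbitrary units)
    "fast_heuristic":   (0.80, 1.0),
    "chain_of_thought": (0.92, 8.0),
    "tool_augmented":   (0.97, 25.0),
}

def resource_rational_choice(cost_weight: float) -> str:
    """Return the strategy maximizing accuracy - cost_weight * cost."""
    def utility(item):
        _name, (accuracy, cost) = item
        return accuracy - cost_weight * cost
    return max(STRATEGIES.items(), key=utility)[0]
```

As the cost weight grows, the rational choice shifts from expensive tool-augmented reasoning down to the cheap heuristic, mirroring the "smarter, not just harder" trade-off.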
Rewarding Wisdom, Not Just Outputs
Thanks to extensive human feedback, AI is trained to produce what we desire. But behavioural science tells us to aim higher. We should be schooling AI in the art of wisdom: being humble, dealing with the unknown, listening to different voices, and knowing when to say, "You know what? You're the expert here."
Methods like Meta-Reinforcement Learning (MRL) or Process Reward Models (PRM) can reward these metacognitive skills, encouraging AI to reason wisely: expressing uncertainty when justified, seeking other viewpoints, and challenging its own conclusions.
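As a toy illustration of this idea (not a description of how PRMs or MRL are actually trained), a Brier-style reward pays more when stated confidence matches actual correctness, so an honest "I'm not sure" on a wrong answer scores better than a confident error:

```python
# Toy reward for calibrated uncertainty: 1 minus the squared gap
# between stated confidence and the actual outcome. Values are
# illustrative, not drawn from any real training pipeline.

def calibration_reward(stated_confidence: float, was_correct: bool) -> float:
    """Return a reward in [0, 1] that favors well-calibrated confidence."""
    outcome = 1.0 if was_correct else 0.0
    return 1.0 - (stated_confidence - outcome) ** 2

# Overclaiming on a wrong answer is punished harder than hedging:
overconfident_wrong = calibration_reward(0.99, False)  # near 0
hedged_wrong = calibration_reward(0.40, False)         # much higher
confident_right = calibration_reward(0.95, True)       # near 1
```

Under a reward shaped like this, the model's best policy is to report uncertainty it genuinely has, which is exactly the humility the section argues for.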
Neurosymbolic AI: Integrating Fast and Slow Thinking
The future of AI may lie in hybrid architectures that combine pattern-matching neural networks (System 1) with rule-based, symbolic systems (System 2). Behavioural science provides a blueprint for how these systems should work together, not as separate entities but as a spectrum with learning flowing both ways.
For example, human expertise involves refining slow, deliberate analyses into fast, intuitive responses. Neurosymbolic AI can use formal models to refine neural “hunches” and, conversely, guide symbolic engines toward promising paths, reducing search burdens and making logic-based reasoning more practical at scale.
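A toy sketch of that division of labour, using stand-in functions rather than real neural or symbolic components: a cheap scorer ranks candidates, and an exact rule-based check verifies them in that order, so the expensive verification explores the most promising paths first.

```python
# Hypothetical neurosymbolic loop: a "neural" proposer (here, a toy
# closeness heuristic) guides a symbolic verifier, shrinking the search.

def neural_propose(target: int, candidates: list[tuple[int, int]]) -> list[tuple[int, int]]:
    """System 1 stand-in: rank factor pairs by a cheap closeness score."""
    return sorted(candidates, key=lambda pair: abs(pair[0] * pair[1] - target))

def symbolic_verify(target: int, pair: tuple[int, int]) -> bool:
    """System 2 stand-in: exact, rule-based check."""
    return pair[0] * pair[1] == target

def solve(target: int, candidates: list[tuple[int, int]]):
    # Verify candidates in the order the fast scorer suggests.
    for pair in neural_propose(target, candidates):
        if symbolic_verify(target, pair):
            return pair
    return None  # No candidate passed the symbolic check.
```

The same shape runs in reverse, too: the symbolic check's verdicts could be used as training signal to sharpen the proposer's "hunches" over time.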
As AI’s influence grows, it’s clear that we need to pair it with the wisdom of behavioural science. We must move beyond building AI that merely answers quickly toward building AI that reasons wisely.
Based on the Augment article from BIT.
Explore our AI services and capabilities at Perficient
