Skip to main content

Generative AI

Confidently Incorrect – Learning, Leading, and AI

A human body with its head replaced as a black cloud, representing AI and clouded thoughts.

A friend recently shared a research paper from Oxford Academic about Large Language Models (LLMs) and their human-like biases. I found it fascinating.

The article explains how some groups use LLMs to simulate human participants. Since these models are trained on human-generated data, they can emulate human responses across diverse psychological and behavioral experiments.

It further notes that LLMs favor socially desirable responses that align with the Big Five personality traits – including agreeableness. Notably, during their experiments, LLMs would modify the responses if the researcher appeared to be evaluating it.

Confidently Wrong AI (Posturing)

I had planned to write about the phrase “confidently wrong,” which we hear often when talking about Artificial Intelligence (AI) models. Combined with the concept of AI hallucinations, this can mislead people who are expecting reliable answers from these tools.

More and more users are favoring AI over traditional online search. No doubt you’ve noticed that Google now shows an AI response above the SERP. This experience is often faster and feels more natural than manual trial-and-error clicks through links.

However, it becomes risky when the LLM is wrong. Once the AI selects an answer, it may be reluctant to admit it was wrong. Even if you challenge a correct statement, the LLM might apologize and change its answer to be agreeable. Users need to be cautious and validate the information received.

Confidently Incorrect Humans (Learning)

I have an 11-year-old son who is hell-bent on being contradictory. If I say the sky is blue, he’ll point out that it can be gray, yellow, orange, red, purple, or black. He’s not wrong, but he is frustratingly contrarian. When I tell him he’s being contradictory, he says, “No I’m not!” Even when he is flat wrong, he won’t let it go!

You’ve probably also heard the phrase “fake it ‘till you make it.” It’s meant to help those who are learning and to ease imposter syndrome. I used to hate the phrase because I prefer transparency. I’d rather hear “I don’t know” than to incorrectly think you have it under control. However, I now appreciate that it helps escape a negative mindset.

AI Confidently Mimicking Humans (Refining)

The Oxford Academic article points out that AI learns behaviors from us! It’s mostly trained on data created by humans, so it picks up our natural tendencies. If our writing is polite and avoids confrontation, the AI will be trained to follow those patterns.

Additionally, humans help validate the training – even crowdsourced to the general public. When you give a thumbs up to a response from an LLM, you’re teaching it what you prefer to see in the output. Over time it will lean toward agreeableness. While it’s not conscious, AI is learning to mimic humans.

The Con Man (Tricking)

The term “con man” or “con artist” comes from the word “confidence.” It refers to the act of manipulating or persuading people into believing something false.

Con artists have existed as long as humans have been able to communicate. There are fun ones, like magicians who amaze us with the spectacular. Then there are the bad kind that scam people out of their life savings. Even reputable sources like the BBC, CNN, Forbes, The Atlantic, and others can sometimes spread misleading information, confusing us even further.

AI is trained on a mix of data, including quality sources like scientific research papers but also the text of trolls attacking everything, and your mother, on Reddit. It learns from both the best and the worst of humanity.

The Confident Leader (Inspiring)

Confidence has two sides. We’re often inspired by confident leaders. When leaders seem uncertain, many people get nervous and may leave the group. It’s clear that we prefer a strong front over complete transparency.

Don’t get me wrong… We know that transparency is important. A quick Google search shows droves of experts saying that transparency is the best policy. We also understand the consequences of an over-confident leader.

But at the end of the day, we’re just regular folks looking for stability and security. Time and again, we’re attracted to leaders who exude confidence and instill inspiration.

Conclusion

We often laugh at poorly executed AI – it makes us feel superior. The same goes for poorly articulated statements from people – it makes us feel superior. We’ve all seen how we collectively attack and criticize each other online.

AI learns from us. It relies on us for continual improvement. It adopts our positive traits but can also mimic our negative behaviors.

As we continue to use AI, it will become a bigger part of our lives. Often, we’ll seek out the interaction, while other times, hidden AI will quietly work in the background. Just like with other people, we need to validate our interactions with AI. Trust, but verify.

……

If you are looking for a strong partner that loves AI but will verify results, reach out to your Perficient account manager or use our contact form to begin a conversation.

Leave a Reply

Your email address will not be published. Required fields are marked *

This site uses Akismet to reduce spam. Learn how your comment data is processed.

Brandon Luhring

Brandon is a consumer experience engagement manager at Perficient. His career has included running digital and marketing projects both in-house and as a consultant. He enjoys topics around creativity, innovation, design, technology, and leadership.

More from this Author

Follow Us