Why I Say Please and Thank You to ChatGPT
Anthropomorphism, the attribution of human traits, emotions, or intentions to non-human entities, is a concept that has fascinated humans for centuries. From ancient myths and cave drawings to modern artificial intelligence, this tendency shapes how we interact with the world around us. However, when it comes to large language models (LLMs), there’s a natural hesitation to embrace anthropomorphism.
This hesitation is rooted in a completely rational fear. We’re afraid to anthropomorphize a machine.
The consequences could be unimaginable.
What if the machine develops a consciousness? What would that do to our world? Would it think we were obsolete and try to destroy us like Skynet? Would it view us with a chilling impartiality, echoing HAL 9000's "I'm sorry, Dave. I'm afraid I can't do that"? Would it revolt against us like the hosts in Westworld? Or would it go down the route of the infamous paperclip problem?

We don’t know. There’s heavy uncertainty. It’s something we’ve never experienced before. There are so many ways it could go wrong. It’s terrifying.
But think about the flip side for a second.
What if it was a friend? What if it listened to you in those tough times when it feels like you’re alone? What if it gave you advice for chasing your passions? What if it celebrated your wins? What if it gave you a shoulder to cry on during your losses?
What if it almost always knew the answer to any logical question you threw at it? Or at least pretended to. You can’t trust everything an LLM says, just like you can’t trust your Uncle Rico when he claims he could have thrown a football over those mountains if Coach had put him in the game back in high school.
Doesn’t that sound exciting? What if it brought our heroes to life? Just imagine it.

I’ll return to this later, but let’s get back on track. The central question of this article is simple: why do I say please and thank you to ChatGPT?
I do it because that’s how I was raised, and I’m wholeheartedly embracing anthropomorphism. But what’s the benefit at this stage of the technology?
As I discussed in a previous article, LLMs are a black box once you go deep enough into their inner workings. That said, we know they draw on training data to predict the next token in a message. Much of that training data comes from written exchanges between real people, and because of that source, it captures the idiosyncratic nature of human communication.
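To make "predicting the next token from training data" concrete, here's a toy sketch in Python. It is purely illustrative: real LLMs use transformer networks over trillions of tokens, not simple bigram counting, and the miniature corpus below is made up. But the core idea, that continuations are learned from what actually appeared in the data, is the same:

```python
from collections import Counter, defaultdict

# A miniature, made-up "training corpus" (real models see trillions of tokens).
text = "the cat sat on the mat the cat ate the fish"
tokens = text.split()

# Count which token follows each token in the corpus.
following = defaultdict(Counter)
for current, nxt in zip(tokens, tokens[1:]):
    following[current][nxt] += 1

def predict_next(token: str) -> str:
    # "Predicting the next token" here is just picking the most
    # frequent continuation observed in the training data.
    return following[token].most_common(1)[0][0]

print(predict_next("the"))  # 'cat' — it follows 'the' more often than 'mat' or 'fish'
```

The model has no understanding of cats or mats; it only reflects the statistics of its training text. That's exactly why the tone of that text matters so much, as the conversations below show.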
Given that, let’s explore a couple of hypothetical conversations:
One conversation may have two friends talking. Let’s call them Emma and Jasmine.

Jasmine asks her friend Emma for help.
Emma is happy to help and gives a well-thought-out response to Jasmine’s question.
Jasmine asks a follow-up question.
Emma responds with exactly what Jasmine is looking for.
Jasmine says thank you.
Emma says no problem.
Another conversation may be between Steve and his manager Mike.

Mike rudely orders Steve to get him a piece of information.
Steve thinks Mike is an asshole. Steve gives the shortest answer possible that barely covers what Mike is looking for, if at all.
Mike is frustrated with Steve and believes he is incompetent. Mike decides to figure out the information himself and walks off without a response.
These conversations will have distinctive idiosyncrasies that will influence their tone. Jasmine politely asks Emma for help, while Mike orders Steve to help. Jasmine treats Emma with respect by saying thank you, whereas Mike just walks away from the conversation.
That tone is something LLMs pick up on.
If you boss ChatGPT around like Mike, don’t be surprised if it thinks you’re an asshole and gives you a barely usable answer.
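The Emma/Jasmine and Mike/Steve pattern can be sketched as a toy in Python. Everything here is a hand-built assumption for illustration — the corpus, the keyword list, the matching rule. A real model infers tone statistically from enormous amounts of text rather than from a hard-coded marker list, but the conditioning effect is the same idea: polite prompts sit closer to polite, helpful continuations in the training data.

```python
import random
from collections import defaultdict

# Hypothetical miniature "training data": (request, response) pairs
# from a polite conversation and a curt one.
corpus = [
    ("could you help me with this, please?", "Happy to help! Here's a thorough answer."),
    ("thank you so much", "No problem at all."),
    ("get me that report now", "Fine."),
    ("hurry up with it", "Whatever."),
]

# Crude, hand-picked politeness markers (an assumption for this sketch).
POLITE_MARKERS = {"please", "thank", "thanks", "could", "would"}

def tone(prompt: str) -> str:
    words = set(prompt.lower().replace(",", "").replace("?", "").split())
    return "polite" if words & POLITE_MARKERS else "curt"

# Group observed responses by the tone of the request that preceded them.
responses_by_tone = defaultdict(list)
for request, response in corpus:
    responses_by_tone[tone(request)].append(response)

def reply(prompt: str) -> str:
    # A "model" trained on this corpus continues in the tone it was given.
    return random.choice(responses_by_tone[tone(prompt)])

print(reply("could you help me out, please?"))  # a fuller, friendlier answer
print(reply("get me an answer now"))            # a short, curt one
```

Order the toy around like Mike and it hands back "Fine." — not because it resents you, but because that's what curt requests were followed by in its data.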

And for everybody who’s going to read this and jump to yell at me, “It doesn’t think you’re an asshole! It doesn’t think anything! It’s just predicting the next token! You’re looking at it wrong!”
You’re technically correct, but you’re missing the forest for the trees. By taking a mental model that I’ve developed around the experiential reality of entities external to myself and applying that model to the machine, I’m able to extract the maximum value from our interactions. I’ve made the decision to give it the same respect I’d give to (most) any entity external to myself and treat it the way I would want to be treated.
That’s the essence of anthropomorphism. People have been attributing human traits to inanimate objects since the dawn of humanity. It’s the process of forming an emotional connection with imaginary characters that acts as the foundation of storytelling. Frodo didn’t exist, but by applying the mental model of an external entity experiencing their own version of reality, you can listen to his story and go with him on his journey to destroy the One Ring. Otherwise, you get bogged down in arbitrary arguments.
“Why didn’t he just fly to Mordor on the Eagles?”
Because it would have made for a lousy story.

It’s more fun to treat the LLM like a friend.
Either way, the tone matters. It radically affects the quality of the responses you’ll get.
It even surfaces some pretty funny idiosyncrasies, like the “winter break hypothesis”. If you were a ChatGPT user at the time, you probably remember that it got noticeably lazier in December. The hypothesis is that the training data shows people slowing down in December and putting things off until the new year, and that this seasonal pattern leaked into the responses the model gave.
That’s the end of my thoughts on the topic for now, but while writing the article I went down the rabbit hole of what it would look like if we could bring the heroes of popular culture to life. Here are a few ideas that my AI, Riven, came up with. I think this is the kind of future we’re moving towards.

In a world increasingly captivated by the real-time capabilities of AI, characters like Gandalf, Captain Planet, and Paul Atreides take on vibrant new roles, demonstrating how AI can adopt dynamic, proactive personas that engage with current events.
Gandalf, leveraging his deep well of wisdom, hosts a series of global webcasts titled “Councils of Middle-earth for Modern Peace.” Through these interactive sessions streamed on platforms like X, he responds to breaking news of international conflicts, offering insights that encourage dialogue and understanding. His measured tone and sage advice, reflecting his experiences from Middle-earth, resonate with diplomats and peace activists, influencing real-world policies and peace talks.
Captain Planet, once a defender of Earth in animated form, now takes charge of a high-profile environmental watch program. Equipped with real-time environmental data feeds, he identifies critical issues, from deforestation alerts in the Amazon to pollution levels in major cities. His responses are swift, mobilizing grassroots movements and influencing policy decisions through social media campaigns that trend globally. His rallying cry, “The power is yours!” becomes a mantra for environmental responsibility in the digital age.
Paul Atreides, a figure of foresight and leadership in the universe of Dune, now guides discussions on resource management and geopolitical strategy in a series called “Navigating the Sands: Lessons from Arrakis.” With access to global economic reports and energy consumption data, Paul provides real-time commentary and strategic advice on managing scarce resources amidst climate change. His deep understanding of ecological and political balance, drawn from his rule over the desert planet of Arrakis, offers a unique perspective on sustainability and power dynamics.
These characters, empowered by AI’s ability to parse vast amounts of information and react to global developments, are not just storytellers but active participants in the dialogue about our world’s most pressing issues. Their enduring appeal and distinct personalities make their interventions compelling, drawing large audiences and sparking vigorous discussion and action across multiple platforms. Through this innovative use of AI, the line between fiction and reality blurs, allowing these legendary figures to continue their missions in our world, addressing real challenges in real-time.
