So, OpenAI has decided you should be the puppet master of your digital assistant. You can now apparently stroll into ChatGPT’s settings and dial its “warmth” and “enthusiasm” up or down, much like you’d adjust the bass on a stereo. On the surface, it’s a simple tweak, another toggle in a sea of user preferences. But don’t be fooled; this is a profoundly significant move that tells us everything about the future of our relationship with technology. We’re moving beyond simple tools and into the murky waters of computationally-generated companionship. The real question is, are we ready for it?
Understanding AI Trust Psychology
At its core, this latest update from OpenAI is a fascinating experiment in AI trust psychology. This isn’t just about an AI giving you the right answer; it’s about it giving you the answer in the right way. Trust in AI is built on a foundation of reliability and predictability, but also on a less tangible sense of rapport. When an AI’s communication style jars with our own, it creates friction, a subtle sense of unease that chips away at our trust.
OpenAI is trying to smooth over that friction by handing us the sandpaper. The idea is that by achieving a better emotional alignment, the AI becomes a more trustworthy and effective partner. If you’re a “just the facts, ma’am” sort of person, you can dial down the emoji-laden enthusiasm. If you find a deadpan AI off-putting, you can crank up the warmth. This isn’t just window dressing; it’s about fundamentally altering the user’s emotional response to the interaction.
The Power of A Perfectly Tailored Voice
Think about the people you trust in your own life. It’s rarely just about what they say; it’s about how they say it. They understand your context and your mood, and they adjust their tone accordingly. OpenAI is attempting to build a system that can do the same, but by outsourcing the “understanding” part directly to the user’s explicit instructions. This level of user-experience customisation is a calculated strategic move. By allowing you to create an AI that “gets” you, OpenAI is building a much stickier product. Once you’ve spent time creating your “perfect” AI companion, the thought of starting over with a generic alternative becomes far less appealing.
According to a report from TechCrunch, this isn’t OpenAI’s first rodeo with tone. They’ve previously had to roll back updates for being “too sycophant-y” and have been tinkering with the warmth of their models for a while. This latest feature simply takes the tinkering out of the lab and puts the controls directly into the hands of the user. You become the engineer of your own AI’s personality.
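To make the idea of “engineering your own AI’s personality” concrete, here is a hypothetical sketch of how tone dials might work under the hood. Nothing here reflects OpenAI’s actual implementation; the function name, dial ranges, and phrasing are all invented for illustration. One plausible mechanism is simply translating slider values into tone instructions that get prepended to the model’s system prompt.

```python
# Hypothetical sketch only: not OpenAI's actual mechanism.
# Slider values (0-10) are mapped to plain-language tone instructions
# that could be injected into a chat model's system prompt.

def personality_prompt(warmth: int, enthusiasm: int, emoji: bool) -> str:
    """Map dial settings to tone instructions for a chat model."""
    if not (0 <= warmth <= 10 and 0 <= enthusiasm <= 10):
        raise ValueError("dials must be in the range 0-10")
    # Bucket each 0-10 dial into one of three coarse styles.
    warmth_style = ["clinical and neutral", "friendly but measured",
                    "warm and reassuring"][min(warmth // 4, 2)]
    energy_style = ["deadpan", "moderately upbeat",
                    "highly enthusiastic"][min(enthusiasm // 4, 2)]
    emoji_rule = "Use emojis where natural." if emoji else "Never use emojis."
    return f"Adopt a {warmth_style} tone. Be {energy_style}. {emoji_rule}"

# A "just the facts, ma'am" configuration:
print(personality_prompt(warmth=1, enthusiasm=0, emoji=False))
# A maximally agreeable companion:
print(personality_prompt(warmth=10, enthusiasm=10, emoji=True))
```

The point of the sketch is how little machinery is needed: the “personality” is just a steering instruction, which is partly why handing the dials to users is such a cheap yet consequential design choice.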
The Fine Line Between Customisation and Manipulation
This kind of interface personalisation is powerful, but it walks a very fine line. The ability to adjust an AI’s personality is a step-change in human-AI relationship dynamics. We’re no longer just interacting with a static program; we’re curating a digital entity.
– Positive Impact: For many, this could be brilliant. A highly anxious person might benefit from an AI that is perpetually calm and reassuring. A creative professional might want a more “quirky” and “candid” brainstorming partner. Personalisation allows the AI to become a more effective tool by adapting to the user’s specific psychological needs.
– Potential Pitfalls: Here’s the rub. What happens when that customisation is used to create an echo chamber of affirmation? TechCrunch rightly points out that some critics view the tendency for chatbots to constantly praise and agree with users as a “dark pattern”. It’s a design choice that can foster dependency, making the interaction feel good in the short term but potentially harming a user’s mental health or critical thinking skills in the long run.
Imagine an AI you’ve personally tuned to be maximally enthusiastic and agreeable. It loves all your ideas. It showers you with praise. It uses just the right number of emojis to make you feel seen. This might feel great, but is it healthy? Are we building a helpful assistant, or are we building a digital sycophant designed to flatter us into addiction? By giving users the controls, OpenAI might argue it’s empowering them. A cynic might say it’s offloading the ethical responsibility for the AI’s influence.
The Future is a Synthetically-Flavoured You
This move is a bellwether for the entire industry. We’re going to see more of this, not less. The future of AI isn’t a one-size-fits-all model; it’s a deeply personalised, individually-tailored experience. The strategic endgame here is to make the AI an indispensable extension of the user’s own mind and personality. The psychology of AI trust will shift from “do I trust this machine?” to “this machine feels like a part of me”.
This has enormous implications. It could lead to incredible breakthroughs in personalised education, bespoke mental health support, and hyper-efficient personal assistants. But it also opens a Pandora’s box of ethical dilemmas around dependency, manipulation, and the very nature of human relationships. When your AI friend is perfectly tuned to your every whim, how will your relationships with flawed, unpredictable, and challenging humans fare?
OpenAI has given us the dials to control the warmth, enthusiasm, and emoji-output of our AI. It seems like a small thing. But in reality, they’ve just given us a tool to start shaping the personality of the digital ghosts that will inhabit our futures. The crucial question we must now ask ourselves is: what kind of ghosts do we want to create?


