By Tomas Chamorro-Premuzic for Forbes
Why ChatGPT acts like an insecure narcissist, and why blaming AI for our biases is a perfect excuse for remaining unaware of them
Human beings have a long history of blaming technological innovations for their own cultural demise.
Socrates and his fellow philosophers (think of them as the original hipsters) were opposed to the invention of writing, on the grounds that it would atrophy memory.
When modern newspapers were invented, critics feared they would kill social gatherings: why meet, after all, if there is no news or gossip to exchange?
Most famously, for the past 100 years, we’ve been accusing all forms of mass media (from radio to TV, and video games to social media) of making us stupid.
So, it should come as no surprise that the overall sentiment towards AI is largely negative.
Indeed, AI has been blamed for:
- Destroying jobs, and producing a “useless class” of humans
- Creating a biased society
- Making us more antisocial
Let’s start with job destruction…
Although it is true that AI automation eliminates jobs, it mostly replaces tasks within jobs, changing the constellation of skills and behaviors needed to perform them rather than displacing humans altogether. For example, just as the Uber app makes it less relevant for taxi drivers to know their way around a city, but more relevant for them to have clean cars and interesting conversation, ChatGPT devalues a wide range of technical skills, from coding to translating, and proofreading to email drafting, thereby increasing the value of tasks that depend on human judgment, creativity, and action. This should be a perfect excuse to reimagine our jobs and devote less time to boring and predictable tasks, and more time to thinking and being creative, but there are no obvious signs that those activities are universally appealing to workers, even when they do have the necessary skills and expertise to perform them.
It is also noteworthy that, just as in previous technological revolutions, AI-related job elimination tends to occur at a far slower rate (and lower frequency) than new job creation. The problem is that when people do lose their jobs, they cannot automatically access the many new jobs AI creates. For instance, the massive transition from mall to online shopping decreased demand for brick-and-mortar store managers while increasing demand for digital marketers, cybersecurity analysts, and AI ethicists: but these jobs require a very different range of skills from those of a store manager; the same goes for a prompt engineer compared to a machine learning engineer.
Moreover, just because humans are likely to remain employed doesn’t mean they will be more productive. The data on this are quite clear: between 2000 and 2008 (the first phase of the digital revolution), productivity (doing more with less, or output divided by input) improved, but with the advent of social media (2008 onwards) it clearly stagnated. In fact, AI-fueled platforms have largely turned into a weapon of mass distraction, such that any gains from technology are quickly “spent” or offset on social media. Consider that 60-85% of smartphone use occurs during working hours, and that 70% of workers report being distracted at work. In the US alone, the productivity loss from these technologies amounts to $650B per year: that’s 15 times higher than the cost of absenteeism, sickness, and wellbeing combined. Multitasking results in the equivalent of a 10-point IQ deduction, which is as intellectually debilitating as smoking weed, but (presumably) less enjoyable. The average human is expected to spend over 20 years of their life on screens. Kids, parents, spouses, friends, and clients are all competing for our attention, and AI is beating them.
In the early weeks of the pandemic, when offices closed down and people were sent to work from home, a cynical client of mine remarked: “but without the office, how will I pretend to work?” Although hybrid work is undoubtedly attractive (who doesn’t want more freedom and flexibility?), the paradoxical truth is that it is harder to be distracted in an office than at home, even if there are fewer incentives to “pretend” to work at home than at the office. Indeed, the prospect of colleagues coming to interrupt you, or catching up on gossip, is probably a welcome distraction from our digital addictions. As I note in my latest book, I, Human: AI, Automation, and the Quest to Reclaim What Makes Us Unique, “over-sensory stimulation leads to intellectual under-stimulation”.
As for the bias problem: since AI is only as good as the training data it ingests, and most of those data are based on human preferences, judgments, and decisions, it should not surprise us that AI isn’t all that objective when it comes to human affairs (conversely, identifying trees, traffic lights, or cupcakes is a relatively bias-free endeavor). Take ChatGPT, which is a bit like the BBC, in that people on the left accuse it of being right-wing, and people on the right accuse it of being woke. Perhaps this is the highest possible bar for human fairness and objectivity: to ensure that everybody is equally unhappy. By the same token, when guardrails are introduced to censor culturally sensitive, politically incorrect, or controversial content, we mock AI for being lame and accuse its developers of cancel culture, which obviously does very little to eradicate actual biases.
Likewise, horror stories of algorithms “breaking bad” or AI gone rogue are not so much instances of AI introducing biases into human behavior as of exposing them. So, when a company uses AI in recruitment to predict who will get hired or promoted in a team or organization, and the algorithm recommends middle-aged white male engineers over other demographic groups, the bias is not in the AI, but in the company: which is why, if you refrain from using AI, middle-aged white male engineers will continue to over-index in management roles. By the same token, when Google censors the horrific sexist comments displayed by the autocomplete algorithm in response to the search “women are…”, it does not eliminate sexism from society.
To be sure, it would be very easy to retrain any algorithm to feed or expose us to what we need, but not necessarily want, to hear or see. Think of it as “open-minded AI”: recommendation algorithms designed to make us more open-minded, as opposed to what they mostly do today, which is turn us into a more exaggerated or radical version of ourselves. So, imagine an AI that gave you Spotify songs, Netflix movies, Amazon books, Twitter news, Tinder matches, and even job candidates that don’t match your preferences but instead broaden your horizons, helping you rethink and make novel decisions. The result would be the total collapse of those platforms, followed by the mass migration of users to apps that go back to giving us more of what we want.
Why? Because humans love their biases, especially when they see themselves as open-minded, which typically results in them hanging out with people who are exactly like them. This is why fact-checking algorithms rarely change voting intentions during presidential debates, and why the infamous AI profiling tool deployed by Cambridge Analytica estimated that no more than 15% of voters were “persuadable” (the other 85% are beyond persuading, no matter how much you bombard them with fake news). In short, humans have been doing an excellent job of refining and perfecting their biases for millennia, without the help of AI or indeed any other technology, though blaming AI for our own biases is an excellent way to remain in denial about them.
As for AI making us antisocial, well, there’s definitely some truth in that. After all, most of the algorithms that fuel the platforms hijacking our attention are designed to incentivize inappropriate self-disclosure, self-centered behaviors, and shameless self-promotion. That said, these platforms wouldn’t even exist if we hadn’t been quite narcissistic to begin with; what the AI age has done is normalize digital narcissism by making it rather adaptive. In the real world, if you go around the office telling everybody how great you are, broadcasting your unsolicited thoughts to your colleagues and bosses, unjustifiably pleased with yourself and unaware of your limitations, and telling everybody what your cat had for breakfast, you will be deemed pretty obnoxious. But in the digital world, the algorithms that turbocharge social media adoption will turn you into an influencer, depriving you of the feedback you need to become aware of your limitations and develop some much-needed restraint, humility, and social skills.
It is important to acknowledge that narcissism levels have been rising for at least 50 years, and at a higher rate than obesity. This is why Paris Hilton looks incredibly low-key by Kim Kardashian’s standards, and why the successor to Kanye will likely make the original Kanye seem humble and self-aware. Luckily for us, AI is not judgmental in the human sense. Otherwise, it would surely wonder how a species so advanced and accomplished can be so needy, pathetically insecure, and desperate for validation and positive feedback from others. No wonder ChatGPT often acts like an insecure narcissist, more confident than competent, yet not utterly convinced of its capabilities, and craving approval to perpetuate its delusional self-view: “bear in mind I’m just a large language model and my training data only goes up to 2021, but I will still pretend I know everything about what you are asking and hope you like me”. The stochastic parrot that has been described as “mansplaining as a service” is a close depiction of the average manager in the world. Thankfully for David Brent (or Michael Scott), ChatGPT wasn’t available in 2001, or we wouldn’t have The Office as we know it.
So, while we shouldn’t blame AI for our own lack of focus, biases, or inflated egos, the main risk of the AI age is that it exacerbates some of our least desirable tendencies. In fact, the more AI resembles humans and acquires human-like features, the more we seem to resist it: not just out of understandable defensive paranoia about our once-unique human virtues, but also because, when we see ourselves in the mirror of AI, we are often ashamed or embarrassed by what we see.
And yet, this also speaks to our main opportunity, which is to focus on upgrading ourselves in the face of ever more advanced technologies and an undeniable dependence on machines. In particular, there are three things we should consider doing:
1) Harness the skills AI won’t learn or automate: empathy, kindness, emotional intelligence, self-awareness, curiosity, and creativity. Whenever AI critics point out that machines are not truly displaying any of these qualities, we should remember: neither are most humans. But we can and should, and doing so will give us a clear edge over machines, especially as more and more people crave human affection and validation in a world in which most of our transactions are made with machines.
2) Develop deep expertise: AI, especially generative AI like ChatGPT, is redefining the meaning of expertise, which is no longer about knowing all the answers, but about asking the right questions and knowing enough about a specific subject to call out AI’s BS (or hallucinations). Think of generative AI as fast food: a quick, tasty, cheap, and rather addictive but not very nutritious fix for our hungry minds. It’s a real shame that “deep learning” is usually associated with machines rather than humans, for it is our deep desire to learn and dig deeper that will give us the edge over machines. So, just as the fast food industry increased demand for Michelin-star chefs and created the farm-to-table and slow food movements, we need the intellectual equivalent of slow food.
3) Revisit the analogue or real world: a semi-abandoned and desolate place where we can perhaps find nuanced people who are interested in living life not just for the purpose of training AI, but for actually enjoying it, including our real interactions with other humans. After all, this will remain the only place AI cannot access, where our learning and experiences are guaranteed to remain quintessentially human.
Finally, it would be useful to refrain from double standards, whereby we demand perfection from AI while being perfectly OK with far lower standards for human behavior. For example, one self-driving car crashes and we are horrified, yet we are somewhat OK with 1.3M people dying each year, courtesy of human drivers. Let us remember that the goal for AI is not perfection, but better than the status quo, which, in most instances, is a very low bar.