Before I start, let me be clear that this post is not about artificial intelligence (AI). It’s about navigating life as AI becomes smarter than us.
The ChatGPT backstory…
Last week I mentioned a recent development that has disrupted our comfortable lives—the release of ChatGPT (Chat Generative Pre-trained Transformer). To elaborate:
ChatGPT is a large language model designed to produce natural human language. Much like having a conversation with someone, you can talk to ChatGPT, and it will remember things you have said in the past while also being capable of correcting itself when challenged. … ChatGPT was trained using a mix of machine learning and human intervention, using a method called reinforcement learning from human feedback (RLHF). Take note of this last point because it is relevant to the proposal in the video that follows.
After being in the works for several years, ChatGPT was released on the internet by OpenAI in November 2022. Now, in the summer of 2023, it’s creating a buzz everywhere.
Initial public concerns were about the ease of fraudulent communications—written, visual, and voice. This could show up in activities such as impersonating others, students submitting essays they didn’t write or even fact-check, and misleading news reports. Granted, these things already happen with our present technology, but the sophistication of AI makes it almost instantaneous, very convincing, readily accessible, and highly pervasive.
These characteristics of AI have raised concerns such as: How will we know what is real any more? What is real anyway? Does it matter? What will happen to jobs? Especially my job?
Then in March 2023, the Future of Life Institute published an open letter calling for “...all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.” According to a CityNews article, it was signed by “more than 1,000 researchers and technologists, including Elon Musk.”
That letter was followed by several highly publicized cautions from early developers such as Prof. Yoshua Bengio and Dr. Geoffrey Hinton. These and many other voices of industry insiders have raised the really big questions: Are we past the point of no return? (Yes) How long will it take for AI to be smarter than humans? (5 years, maybe less)
Whereas the experts are deeply concerned because of what they know from working in the field, the rest of us experience the disempowerment of fear without knowledge. That makes fear doubly scary in any situation.
The logic of fear…
The logic of fear is that a moment in the future is less safe for me than this moment.
Therefore, it’s safer for me to preserve the conditions of this moment, to keep everything the same, to hold the line against change.
The fallacy of this viewpoint is that it’s not the nature of nature to stay the same. We’ve all heard the truism that nothing is certain but change. So we are wasting a lot of energy when we fear the future and try to keep things the way they are.
A more constructive approach…
Given what experts are saying about the inevitability of AI’s rapid evolution, it’s unrealistic to think we can hold the line at the existing level of artificial intelligence (social media, chatbots as customer service contacts, GPS on our mobile phones, autocorrect on text messages, digital assistants, and e-payments).
Rather than trying to halt it, our energy will be much better spent in thinking about how we can navigate life with AI to achieve the best outcome for us, the planet, and humanity.
Sounds good in theory, but the task seems huge—beyond anything we can do. How will we ever be able to navigate what’s next?
The value of thinking differently…
When we don’t know what to do, the best thing is to change our perspective, to look at the situation from a different viewpoint. As Einstein famously said, we can’t solve a problem from the same level of consciousness that created it.
Mo Gawdat is approaching the AI situation from a higher consciousness. He’s a former industry insider who is thinking differently about how we can navigate life as AI becomes smarter than us.
It’s a radically different viewpoint, but who’s to say it wouldn’t work? As noted at the beginning of this post, ChatGPT was trained using a mix of machine learning and human intervention, using a method called reinforcement learning from human feedback (RLHF). As explained on the OpenAI website, this is “a method that uses human demonstrations and preference comparisons to guide the model toward desired behavior.” This suggests to me that we can play a constructive role here.
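To make the idea of “preference comparisons” concrete, here is a minimal toy sketch in Python. This is not OpenAI’s actual RLHF pipeline—real systems train a neural reward model and then optimize the language model against it—but it illustrates the core loop the passage describes: a human compares two candidate responses, and the preferred behavior accumulates a higher reward score. All names here (the function, the example answers) are hypothetical, invented for illustration.

```python
# Toy illustration of preference comparison in RLHF (not OpenAI's
# implementation): a human picks the better of two candidate responses,
# and the chosen behavior's reward score goes up while the other goes down.

def update_scores(scores, preferred, rejected, step=1.0):
    """Shift reward scores toward the human-preferred response."""
    scores = dict(scores)  # copy so the caller's dict is untouched
    scores[preferred] = scores.get(preferred, 0.0) + step
    scores[rejected] = scores.get(rejected, 0.0) - step
    return scores

# A human compares two hypothetical model answers and prefers the helpful one:
scores = {"helpful answer": 0.0, "harmful answer": 0.0}
scores = update_scores(scores, preferred="helpful answer", rejected="harmful answer")
print(scores)  # the preferred behavior now scores higher
```

The point of the sketch is simply that human judgment is the input that shapes the reward signal—which is why the passage argues we can play a constructive role.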
The Prisoner’s Dilemma is a concept in game theory describing a situation in which everyone has an incentive to defect in their own self-interest and, as a result, everyone is worse off.
To hear Mo Gawdat speak at greater depth about this idea, go to 1:50 of his interview with Tom Bilyeu. For more context, start earlier, at 1:18.
In times when every “next thing” presents another big challenge that seems beyond our control, we will navigate more easily if we consciously avoid a knee-jerk fear reaction and instead put our attention on discovering empowering possibilities for action.
As Mo Gawdat says, “We might as well…ask them [AI] to do what’s good for us, good for the planet, and good for humanity.”
From my perspective, our human superpower is that we can choose to think differently—and thus create the more beautiful world we know in our hearts is possible.
If you found this of value, please share it with someone.