Answers & Questions…

After researching the ins and outs and the pros and cons, I decided to have a ChatGPT experience.

My ChatGPT experience…

I started with three questions.

  • I am a human. How can I best relate to you?
  • I’m wondering if you are trained in human values.
  • From what you’ve learned in your training, what is your opinion of humans? 

As expected, the answer to the third question was essentially that ChatGPT has no ability to form subjective opinions on any topic. So I rephrased the question:

  • What are the prominent values that humans express in their interactions? 

With caveats about variations across individuals, cultures, and societies, a list came back with six points, each elaborated by a couple of sentences: respect; empathy and compassion; honesty and integrity; fairness and justice; cooperation and collaboration; freedom and autonomy.

This didn’t address the nuance of my question, which was values that humans express in their interactions. So I replied:

  • This sounds aspirational, but I question whether these values are exhibited in practice. Would you please give me a more realistic answer, based on observation of human behaviour?

With caveats about human behaviour being complex and multifaceted, another list came back, again with two-sentence descriptions: self-interest; cultural and societal influences; conflict and disagreement; bias and discrimination; altruism and kindness.

If you’re curious, you can read the full chat by clicking the icon below…

Limitations…

The free version, GPT-3.5, is not connected to the internet. All answers come from the body of information it was trained on, so its knowledge does not extend past 2021. The paid version, GPT-4, is more current.

A recent article in Forbes highlights The Top 10 Limitations of ChatGPT. Here’s the first one:

Lack of common sense: While ChatGPT can generate human-like responses and has access to a large amount of information, it does not possess human-level common sense — and the model also lacks the background knowledge we have. This means that ChatGPT may sometimes provide nonsensical or inaccurate responses to certain questions or situations.

This means that you, the human, must practise discernment in assessing any answer you get from your chatbot. Does it make sense? Is it accurate?

At the moment, ChatGPT invents information with great facility when it doesn’t know an answer. In one case, it produced a list of five books that supported the topic under consideration. When the human fact-checked the list, not one of those books existed.

The industry refers to this as hallucination. The current rate of hallucination is said to be 15–20%, although it’s anticipated that this situation will improve over time. (Perhaps someone will teach ChatGPT about the human value known as honesty?)

Keep your wits about you…

While researching AI, I came across a compelling quote that reminds us not to blindly accept answers. These days, questioning answers seems crucial, since a chatbot can provide an authoritative-sounding answer in three seconds. (No kidding: to each of my questions, the response started within three seconds and printed rapidly, without a pause, to completion.)

Implications for safety…

Yuval Noah Harari is an Israeli public intellectual, historian, and professor in the Department of History at the Hebrew University of Jerusalem. His writings examine free will, consciousness, intelligence, happiness, and suffering. Here is a video of his interview, “Safe and Responsible AI?”

Key points from the interview…

AI has developed much faster than the experts expected. Three things everyone should know about AI:

  1. It is the first tool in human history that can make decisions by itself. That is unlike any previous invention.
  2. It is the first tool in human history that can create new ideas by itself. Previous tools could only disseminate our ideas.
  3. Humans are not very good at using new tools. We often make mistakes. It takes time for us to learn to use them in a beneficial and wise way. We know this from what happened during and after the Industrial Revolution. The crucial thing is that while we are learning to use AI, it is learning to use us. So we have less time and margin for error than with any previous invention.

The rest of the interview discusses implications and regulation of AI as it spreads in use. Good to be aware of.

My second ChatGPT experience…

I started another line of inquiry with my friendly chatbot, which had kindly concluded our first session with “If you have any more questions in the future, feel free to ask. Have a great day!”

My question was…

  • How would you interpret this quote from Yuval Noah Harari: “Questions you cannot answer are usually far better for you than answers you cannot question.”

As I read the response, I noticed that Harari was referred to in the answer. I wondered what the reply would be if I did not include the source of the quote. So I removed his name and asked the revised question.

Comparing the two replies, I saw that the overall organization was identical. Concepts in each paragraph were the same, with slight variation in arrangement of words. Click the icon below if you’d like to see what ChatGPT said about the quote… and maybe reflect on whether or not that would be your interpretation.

And one more chat session…

Having recently written about discernment, I asked ChatGPT to write a song on the topic. The result came back in about 10 seconds, with 3 verses, a chorus, a bridge, and an outro. When I asked for a shorter song that did not lose the essence, the request was met.

The lyrics consist of appropriate buzz words about discernment, strung together with some sense of cadence and rhyme. Hokey is the descriptor that immediately came to mind in terms of both structure and content. This is a demonstration of another of the weaknesses identified in the previously mentioned Forbes article.

Lack of emotional intelligence: While ChatGPT can generate responses that seem empathetic, it does not possess true emotional intelligence. It cannot detect subtle emotional cues or respond appropriately to complex emotional situations.

When I asked for the music, I was advised that ChatGPT does not do audio. This was followed by loads of advice about musical style, genre, and arrangement.

Below is the full chat. The last phrase of the first verse doesn’t make sense to me; it doesn’t fit the context—an example of AI’s lack of common sense.

Good resources are important when navigating…

When we’re heading into new territory, as I was here, good resources pave the way. I did a lot of reading before opening a chat account. Here are two resources that I found particularly helpful…

And now I’m wondering…

What struck you about any or all of this?

If you found this of value, please share it with someone.

Thinking Differently & Why It Matters

Before I start, let me be clear that this post is not about artificial intelligence (AI). It’s about navigating life as AI becomes smarter than us.

The ChatGPT backstory…

Last week I mentioned a recent development that has disrupted our comfortable lives—the release of ChatGPT (Chat Generative Pre-trained Transformer). To elaborate:

ChatGPT is a large language model designed to produce natural human language. Much like having a conversation with someone, you can talk to ChatGPT, and it will remember things you have said in the past while also being capable of correcting itself when challenged. … ChatGPT was trained using a mix of machine learning and human intervention, using a method called reinforcement learning from human feedback (RLHF). Take note of this last point because it is relevant to the proposal in the video that follows.
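For the technically curious, the core idea of those preference comparisons can be sketched in a few lines. This is strictly a toy illustration, not OpenAI's actual implementation: here each candidate response simply carries a scalar reward, and every human comparison nudges the preferred response's score up and the rejected one's down, in the style of a Bradley-Terry update.

```python
import math

def preference_update(rewards, preferred, rejected, lr=1.0):
    """Nudge rewards so `preferred` outscores `rejected` (toy RLHF step)."""
    # Probability the current rewards assign to the human's choice.
    p = 1.0 / (1.0 + math.exp(rewards[rejected] - rewards[preferred]))
    # The less confident the model was, the bigger the correction.
    rewards[preferred] += lr * (1.0 - p)
    rewards[rejected] -= lr * (1.0 - p)
    return rewards

# Hypothetical labels for two candidate responses.
rewards = {"helpful answer": 0.0, "made-up books": 0.0}
for _ in range(10):  # ten simulated human comparisons
    preference_update(rewards, "helpful answer", "made-up books")

print(rewards["helpful answer"] > rewards["made-up books"])  # True
```

Repeated over millions of comparisons against a real neural network rather than a lookup table, this kind of signal is what steers the model toward responses humans prefer.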

After being in the works for several years, ChatGPT was released on the internet by OpenAI in November 2022. Now, in the summer of 2023, it’s creating a buzz everywhere.

Initial public concerns were about the ease of fraudulent communications—written, visual, and voice. This could show up in activities such as impersonating others, students submitting essays they didn’t write or even fact-check, and misleading news reports. Granted, these things already happen with our present technology, but the sophistication of AI makes it almost instantaneous, very convincing, readily accessible, and highly pervasive.

These characteristics of AI have raised concerns such as: How will we know what is real any more? What is real anyway? Does it matter? What will happen to jobs? Especially my job?

Then in March 2023, the Future of Life Institute published an open letter calling for “...all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.” According to a CityNews article, it was signed by “more than 1,000 researchers and technologists, including Elon Musk.”

That letter was followed by several highly publicized cautions from early developers such as Prof. Yoshua Bengio and Dr. Geoffrey Hinton. These and many other voices of industry insiders have raised the really big questions: Are we past the point of no return? (Yes.) How long will it take for AI to be smarter than humans? (Five years, maybe less.)

It’s scary…

Whereas the experts are deeply concerned because of what they know from working in the field, the rest of us experience the disempowerment of fear without knowledge. That makes fear doubly scary in any situation.

The logic of fear…

The logic of fear is that a moment in the future is less safe for me than this moment.

Therefore, it’s safer for me to preserve the conditions of this moment, to keep everything the same, to hold the line against change.

The fallacy…

The fallacy of this viewpoint is that it’s not the nature of nature to stay the same. We’ve all heard the truism that nothing is certain but change. So we are wasting a lot of energy when we fear the future and try to keep things the way they are.

A more constructive approach…

Given what experts are saying about the inevitability of AI’s rapid evolution, it’s unrealistic to think we can hold the line at the existing level of artificial intelligence (social media, chatbots as customer service contacts, GPS on our mobile phones, autocorrect on text messages, digital assistants, and e-payments).

Rather than trying to halt it, our energy will be much better spent in thinking about how we can navigate life with AI to achieve the best outcome for us, the planet, and humanity.

Sounds good in theory, but it seems huge beyond anything we can do. How will we ever be able to navigate what’s next?

The value of thinking differently…

When we don’t know what to do, the best thing is to change our perspective, to look at the situation from a different viewpoint. As Einstein famously said, we can’t solve a problem from the same level of consciousness that created it.

Mo Gawdat is approaching the AI situation from a higher consciousness. He’s a former industry insider who is thinking differently about how we can navigate life as AI becomes smarter than us.

It’s a radically different viewpoint, but who’s to say it wouldn’t work? As noted at the beginning of this post, ChatGPT was trained using a mix of machine learning and human intervention, using a method called reinforcement learning from human feedback (RLHF). As explained on the OpenAI website, this is “a method that uses human demonstrations and preference comparisons to guide the model toward desired behavior.” This suggests to me that we can play a constructive role here.

The Prisoner’s Dilemma is a concept in game theory in which everyone has an incentive to defect in their own self-interest and, as a result, everyone is worse off.

To hear Mo Gawdat speak at greater depth about this idea, go to 1:50 of his interview with Tom Bilyeu. For more context, start earlier, at 1:18.

Moving forward…

In times when every “next thing” presents another big challenge that seems beyond our control, we will navigate more easily if we consciously avoid a knee-jerk fear reaction and instead put our attention on discovering empowering possibilities for action. 

As Mo Gawdat says, “We might as well…ask them [AI] to do what’s good for us, good for the planet, and good for humanity.” 

From my perspective, our human superpower is that we can choose to think differently—and thus create the more beautiful world we know in our hearts is possible.


What Next?!

I’ve been contemplating it a lot lately—What next for me? What next in the world at large? And I know I’m not alone in wondering what will present itself and how I’ll navigate whatever appears.

I doubt that anyone has been immune to discombobulation as we’ve been confronted by one unexpected and unthinkable event after another. A confounding US presidential election result, the rapid arrival of a global pandemic, a war in Europe that has gone on for well over a year—just a few of the events of enormous magnitude that turned our world upside down.

And now, just as we thought we’d found our feet again, we’re dealing with yet another—the general accessibility of an artificial intelligence with capabilities that have stunned even people in the industry. These events, along with numerous others, have greatly disrupted our comfortable mindset about how life works.

Shifting perspective…

Most of us would prefer to avoid disruption, but it can be a good thing. When life turns upside down, we get a chance to see things differently… if we choose to.

I remember the story that first shifted my thinking about good and bad fortune. Here’s a charming version narrated by Alan Watts. Watts, who died in 1973, was an early interpreter and popularizer of Eastern philosophy for a Western audience. This is his telling of the story about a Chinese farmer, his horse, and his son…

Maybe…

Going back to the disruptive events we are experiencing, perhaps we can learn something from this story. What looks like a bad thing might turn out to be a good thing in the long run.

For example, the AI that has recently got our attention, known as ChatGPT, is evoking a lot of fear—about loss of jobs for humans, its power to impersonate humans, the rate at which it is evolving…

Those are legitimate concerns. But, on the other hand, perhaps the disconcerting appearance of ChatGPT is actually serving a useful purpose.

What if the potential for ChatGPT to run amok prompts us to look deeper within to see what we value and what makes us human?

What if awareness of what is important and what makes us human prompts us to take responsibility for our own actions and to conduct our lives in accordance with that awareness of what we value as humans?

And what if, instead of worrying that AI is going to take us over, we teach it our values, just as parents do with their developing children?

Choose to see things differently…

Is AI a bad thing?

Maybe.

Is AI a good thing?

Maybe.

How can we make it a good thing? That is the key question.
