Answers & Questions…

After doing research into the ins and outs and the pros and cons, I decided to have a ChatGPT experience of my own.

My ChatGPT experience…

I started with three questions.

  • I am a human. How can I best relate to you?
  • I’m wondering if you are trained in human values.
  • From what you’ve learned in your training, what is your opinion of humans? 

As expected, the answer to the third question was essentially that ChatGPT has no ability to form subjective opinions on any topic. So I rephrased the question:

  • What are the prominent values that humans express in their interactions? 

With caveats about variations across individuals, cultures and societies, a list came back with 6 points, each elaborated in a couple of sentences: respect; empathy and compassion; honesty and integrity; fairness and justice; cooperation and collaboration; freedom and autonomy.

This didn’t address the nuance of my question, which was about the values humans express in their interactions. So I replied:

  • This sounds aspirational, but I question that these values are exhibited in practice. Would you please give me a more realistic answer, based on observation of human behaviour.

With caveats about human behaviour being complex and multifaceted, another list came back, again with two-sentence descriptions: self-interest; cultural and societal influences; conflict and disagreement; bias and discrimination; altruism and kindness.

If you’re curious, you can read the full chat by clicking the icon below…

Limitations…

The free version, GPT-3.5, is not connected to the internet. All answers come from the data it was trained on, so its knowledge does not extend past 2021. The paid version, GPT-4, is more current.
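If you’re curious how these same models look to a programmer rather than through the chat window, here is a minimal sketch using the OpenAI Python library. This is purely my own illustration (my sessions used only the web interface): the model names gpt-3.5-turbo and gpt-4 correspond roughly to the free and paid tiers, and neither call gives the model live access to the internet.

    # Minimal sketch: asking one of my questions programmatically.
    # Assumes the openai package is installed and OPENAI_API_KEY is set.
    from openai import OpenAI

    client = OpenAI()  # reads the API key from the environment

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # swap in "gpt-4" for the paid-tier model
        messages=[
            {"role": "user",
             "content": "What are the prominent values that humans express in their interactions?"}
        ],
    )
    print(response.choices[0].message.content)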

A recent article in Forbes highlights The Top 10 Limitations of ChatGPT. Here’s the first one:

Lack of common sense: While ChatGPT can generate human-like responses and has access to a large amount of information, it does not possess human-level common sense — and the model also lacks the background knowledge we have. This means that ChatGPT may sometimes provide nonsensical or inaccurate responses to certain questions or situations.

This means that you, the human, must practise discernment in assessing any answer you get from your chatbot. Does it make sense? Is it accurate?

At the moment, ChatGPT invents information with great facility when it doesn’t know an answer. In one case, it produced a list of 5 books that supposedly supported the topic under consideration. When the person asking fact-checked the list, not one of those books existed.

The industry refers to this as hallucination. The current rate of hallucination is said to be 15–20%, although it’s anticipated that this situation will improve over time. (Perhaps someone will teach ChatGPT about the human value known as honesty?)

Keep your wits about you…

While researching AI, I came across a compelling quote that reminds us not to blindly accept answers. These days, questioning answers seems crucial, since a chatbot can provide an authoritative-sounding answer in 3 seconds. (No kidding: for each of my questions, the response started within 3 seconds and printed rapidly, without a pause, to completion.)
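Incidentally, that rapid, pause-free printing is a clue to how the answer arrives: the text is streamed out piece by piece as the model generates it, rather than composed in full and then sent. Here is a rough sketch of what that looks like through the OpenAI Python library; again, this is my own illustration of the mechanism, not something the chat window shows you.

    # Sketch: streaming a reply so it prints as it is generated,
    # much like the chat window's rapid, pause-free output.
    from openai import OpenAI

    client = OpenAI()

    stream = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Why should we question answers?"}],
        stream=True,  # deliver the reply in small chunks as it is produced
    )
    for chunk in stream:
        piece = chunk.choices[0].delta.content
        if piece:
            print(piece, end="", flush=True)
    print()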

Implications for safety…

Yuval Noah Harari is an Israeli public intellectual, historian, and professor in the Department of History at the Hebrew University of Jerusalem. His writings examine free will, consciousness, intelligence, happiness, and suffering. Here is a video of his interview, “Safe and Responsible AI?”

Key points from the interview…

AI has developed much faster than the experts expected. Three things everyone should know about AI:

  1. It is the first tool in human history that can make decisions by itself. That makes it unlike any previous invention.
  2. It’s the first tool in human history that can create new ideas by itself. Previous tools could only disseminate our ideas.
  3. Humans are not very good at using new tools. We often make mistakes. It takes time for us to learn to use them in a beneficial and wise way. We know this by looking at examples of what happened during and after the Industrial Revolution. The crucial thing is that while we are learning to use AI, it is learning to use us. So we have less time and margin for error than with any previous invention.

The rest of the interview discusses the implications and regulation of AI as its use spreads. Good to be aware of.

My second ChatGPT experience…

I started another line of inquiry with my friendly chatbot, which had kindly concluded our first session with “If you have any more questions in the future, feel free to ask. Have a great day!”

My question was…

  • How would you interpret this quote from Yuval Noah Harari: “Questions you cannot answer are usually far better for you than answers you cannot question.”

As I read the response, I noticed that Harari was referred to in the answer. I wondered what the reply would be if I did not include the source of the quote. So I removed his name and asked the revised question.

Comparing the two replies, I saw that the overall organization was identical. The concepts in each paragraph were the same, with only slight variation in wording. Click the icon below if you’d like to see what ChatGPT said about the quote… and maybe reflect on whether that would be your interpretation too.

And one more chat session…

Having recently written about discernment, I asked ChatGPT to write a song on the topic. The result came back in about 10 seconds, with 3 verses, a chorus, a bridge, and an outro. When I asked for a shorter song that did not lose the essence, the request was met.

The lyrics consist of appropriate buzzwords about discernment, strung together with some sense of cadence and rhyme. Hokey is the descriptor that immediately came to mind, for both structure and content. This demonstrates another of the weaknesses identified in the Forbes article mentioned earlier.

Lack of emotional intelligence: While ChatGPT can generate responses that seem empathetic, it does not possess true emotional intelligence. It cannot detect subtle emotional cues or respond appropriately to complex emotional situations.

When I asked for the music, I was told that ChatGPT does not do audio. This was followed by loads of advice about musical style, genre, and arrangement.

Below is the full chat. The last phrase of the first verse doesn’t make sense to me; it doesn’t fit the context—an example of AI’s lack of common sense.

Good resources are important when navigating…

When we’re heading into new territory, as I was here, good resources pave the way. I did a lot of reading before opening a chat account. Here are two resources that I found particularly helpful…

And now I’m wondering…

What struck you about any or all of this?

If you found this of value, please share it with someone.