What Next?! Recap & Deeper Dives

From a cosmic perspective, humanity currently has an opportunity to rise to a higher state of consciousness. What we are experiencing at this time is the instability and chaos that precedes such an enormous shift. My previous five posts have been about navigating life through the challenges of 2023 and beyond.

Throughout this group of posts, artificial intelligence (AI) served as an example of the next big challenge that is upon us. Its imminence makes us aware of our need for good navigation skills as we make our way through the future we’re headed into.

I’m currently working on the next group—the theme is Why Not?!

Until those posts are complete, I’m leaving you with a summary of What Next?! along with opportunities for further exploration if you’re so inclined.

Recapitulation…

  1. What Next?!  Video: The Chinese Farmer. Instead of seeing things as good and bad, he holds all occurrences lightly, without judging them.
  2. Thinking Differently & Why It Matters  Video: Mo Gawdat, on avoiding disaster by teaching AI human values.
  3. Holding Your Centre & How You Can  Video: Ashana, playing healing crystal bowls that reinforce our deepest inner connection.
  4. Practising Discernment & Why It Helps  These days we may feel unprepared for the problems we must solve. Discernment helps us navigate with confidence and step up to do the right thing.
  5. Answers & Questions  Video referenced: Yuval Noah Harari, Safe and Responsible AI? Learn from my experience questioning ChatGPT about human values, a quote about questions and answers, and writing a song about discernment. Results are shared in full for anyone who is curious.

DIVING DEEPER…

I’ve selected two interviews for further exploration. For this purpose, I’m interested in the thinking processes as much as the content.

Deep Dive #1 – Thinking differently & Practising discernment

The book under discussion in the video that follows is Best Things First. The author, Bjørn Lomborg, concerns himself with global issues that go well beyond climate change.

Lomborg’s starting point is: Panic is not the mode to be in if you want to solve issues. When it comes to global warming, it’s a problem but it’s not the end of the world. Therefore, we have time to enact the “bang-for-your-buck” concept, finding what works best in global issues related to health, hunger, and education…then applying ourselves (and our money) to rapidly improving those things.

Bjørn Lomborg is a globally recognized author and thought leader renowned for his innovative perspectives on addressing global issues. His mission is to help people discover the most effective solutions to the world’s greatest challenges, from disease and hunger to climate and education.

Tom Bilyeu, the interviewer, is a podcaster and entrepreneur. He emphasizes that we can’t already know how to solve global problems that we’ve never encountered before. He urges us to learn how to think through novel problems, building a rubric through which we can approach them. Essentially he’s referring to developing a list of specific criteria to evaluate items under consideration and determine which possibilities meet the criteria.

He sets the framework…

  1. Start with your North Star, your guiding principle. Lomborg identified people, planet, prosperity.
  2. Use benefit/cost analysis to prioritize. In other words, find what works best (greatest benefit for least cost) and pick that.
  3. Do those best things in each area of concern first.
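
To make the second step concrete, here is a minimal sketch of benefit/cost prioritization. The interventions and all the numbers are invented purely to illustrate the ranking logic; they are not Lomborg’s figures.

```python
# Hypothetical interventions with illustrative benefit and cost figures.
# All names and numbers are invented for demonstration only.
interventions = [
    {"name": "intervention A", "benefit": 90, "cost": 30},
    {"name": "intervention B", "benefit": 50, "cost": 5},
    {"name": "intervention C", "benefit": 200, "cost": 100},
]

def prioritize(items):
    """Sort items by benefit per unit of cost, greatest first."""
    return sorted(items, key=lambda x: x["benefit"] / x["cost"], reverse=True)

for item in prioritize(interventions):
    print(f'{item["name"]}: ratio {item["benefit"] / item["cost"]:.1f}')
```

The third step of the framework then amounts to working down the ranked list, funding the highest-ratio items first until the budget runs out.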

This interview is a good opportunity to observe two people demonstrating how they think as they explore Lomborg’s findings. You might be surprised at what ended up on his list of 12 things to do first.

When people are working through solutions to difficult problems, they are usually thinking differently from what we’re comfortable with. It’s up to each of us to discern how Lomborg’s recommendations sit with us and ask questions when we’re not satisfied. What are the gaps? Is the premise sound?

In other words, it’s a good chance to practise discernment as you listen to the conversation.

 Watch the video… Do These 12 THINGS First If You Want a BRIGHT FUTURE  July 25, 2023

 

Deep Dive #2 – Looking at an issue in the wider cultural context

If you ever think about things like the economic system and how it drives most of what happens in our lives, you will appreciate the breadth and depth of Liv Boeree’s conversation (video link below) with Daniel Schmachtenberger.

In his introductory comments, Schmachtenberger states his intention: to identify AI risk scenarios and a way of thinking about the entire risk landscape that is different from the usual way of talking about it… and to provide insight into what might be required to protect against those risks.

Daniel Schmachtenberger is a social philosopher and founding member of The Consilience Project, aimed at improving public sense-making and dialogue. He has a particular interest in the topics of catastrophic and existential risk, as well as civilizational and institutional decay and collapse. In her written description of the interview, interviewer Liv Boeree cautions:

Not a conversation for the faint-hearted, but crucial nonetheless. This is a deep dive into the game theory and exponential growth underlying our modern economic system, and how recent advancements in AI are poised to turn up the pressure on that system, and its wider environment, in ways we have never seen before.

It would help to understand these terms…

Moloch: Moloch has appeared in literature in a variety of forms. The Canaanite god Moloch was the recipient of child sacrifice according to the account of the Hebrew Bible. Moloch is depicted in John Milton’s epic poem Paradise Lost as one of the greatest warriors of the rebel angels, vengeful and militant.

In the 19th century, “Moloch” came to be used allegorically for any idol or cause requiring excessive sacrifice. Bertrand Russell in 1903 used Moloch to describe oppressive religion, and Winston Churchill in his 1948 history The Gathering Storm used “Moloch” as a metaphor for Adolf Hitler‘s cult of personality.

In modern usage it denotes a tyrannical power, such as “the great Moloch of war” or “duty has become the Moloch of modern life.” Liv Boeree, the interviewer and an expert in game theory, defines Moloch as the God of unhealthy competition.

Meta-crisis: The meta-crisis is an entangled series of crises—ecological, psychological, spiritual, cultural, governmental, and economic. The meta-crisis is all of these and not reducible to any one of them alone. AI is not one of the risks embedded within the meta-crisis; it is an accelerant of all of them.

The meta-crisis is a self-accelerating phenomenon that grows more complex each day. For example, ChatGPT was version 3.5 when it was launched on the internet a few months ago. Since then, version 4 has been made available. Although ChatGPT-4 has access to more current information (unlike 3.5, which was limited to pre-2021 data), version 4 is still only programmed to do certain kinds of things.

The next step is AGI (Artificial General Intelligence), which will be fully autonomous and therefore immune to any human efforts to pull the plug. It will be able to set its own goals, independent from ours, and then take steps to implement actions toward those goals. It won’t matter whether we like its goals or not. The concern of Schmachtenberger, along with many others, is that AGI will be intelligence unbound by wisdom (more below).

Compounding the meta-crisis is technology—technology that makes us more distracted, divided, and confused, thereby reducing our ability to act wisely. And yet, paradoxically, this same technology gives us god-like powers which increase the need to act wisely. A very good talk: Confronting The Meta-Crisis: Criteria for Turning The Titanic – Terry Patten speaking at Google

The Alignment Problem: Misalignment is a challenging, wide-ranging problem to which there is currently no known solution. As AI systems get more powerful, they don’t necessarily get better at doing what humans want them to.

For example, large language models such as OpenAI’s GPT-3 and Google’s LaMDA get more powerful as they scale. When they get more powerful, they exhibit novel, unpredictable capabilities—a characteristic called emergence. Alignment seeks to ensure that, as these new capabilities emerge, they continue to align with the human goals the AI system was designed to achieve.

The problem comes from a misalignment of intelligence and wisdom. Any system can be misaligned, even one that is highly intelligent, if the wisdom piece is missing. Think back to Mo Gawdat and his idea about teaching human values to our AI. That solution is aimed at addressing the alignment problem by teaching wisdom to our AI.

Intelligence and wisdom…

At this point, it is worth interjecting Schmachtenberger’s discussion of intelligence and wisdom in another interview (starting at 2:46:25). From that deeper, wider context, here are the key points:

  • It is fair to say that human intelligence, unbound by wisdom, is the cause of the meta-crisis.
  • This same intelligence has created all the technologies—the agricultural, industrial, digital, nuclear weapons, energy harvesting…
  • It also made the system of capitalism, of communism, of…
  • This type of intelligence takes our physical (corporeal) capacities and extends them considerably—in the way a fist is extended through a hammer, or an eye is extended through a microscope or telescope (extra-corporeal).
  • And now, the type of intelligence that does this “is having the extra-corporeal intelligence be that type of intelligence itself—in maximum recursion, not bound by wisdom, driven by international, multipolar, military traps and markets.”
  • The narrow optimization it fosters is very dangerous.
  • This system is structured to perpetuate narrow short-term goals at the expense of long-term wide values. The question is, what goals are worthy of optimization?
  • What we need are systems of collective intelligence and wisdom that are based on the thriving of life in perpetuity. Nothing less will be effective.
  • Intelligence has to be bound by wisdom.
  • Wisdom requires more than just being able to attune to the known metrics, and more than just the optimization and logic processes of those metrics.
  • Wisdom will always be bound to restraint.
  • Wisdom is more possible at smaller scale, where people can be in richer relationships with each other.
  • Understanding the limits of our own models is wisdom. There are always unknowns that models cannot account for.

Watch the interview… Misalignment, AI & Moloch  March 30, 2023

~ If you found this post of value, please share it with someone. ~

Answers & Questions…

After doing research about the ins and outs and the pros and cons, I decided to have a ChatGPT experience.

My ChatGPT experience…

I started with three questions.

  • I am a human. How can I best relate to you?
  • I’m wondering if you are trained in human values.
  • From what you’ve learned in your training, what is your opinion of humans? 

As expected, the answer to the third question was essentially that ChatGPT has no ability to form subjective opinions on any topic. So I rephrased the question:

  • What are the prominent values that humans express in their interactions? 

With caveats about variations across individuals, cultures, and societies, a list came back with six points, each elaborated in a couple of sentences: respect; empathy and compassion; honesty and integrity; fairness and justice; cooperation and collaboration; freedom and autonomy.

This didn’t address the nuance of my question, which was values that humans express in their interactions. So I replied:

  • This sounds aspirational, but I question that these values are exhibited in practice. Would you please give me a more realistic answer, based on observation of human behaviour.

With caveats about human behaviour being complex and multifaceted, another list came back, again with two-sentence descriptions: self-interest; cultural and societal influences; conflict and disagreement; bias and discrimination; altruism and kindness.

If you’re curious, you can read the full chat by clicking the icon below…

Limitations…

The free version, GPT 3.5, is not connected to the internet. All answers come from the databank of information it was trained on, so its knowledge does not extend past 2021. The paid version, GPT 4, is more current.

A recent article in Forbes highlights The Top 10 Limitations of ChatGPT. Here’s the first one:

Lack of common sense: While ChatGPT can generate human-like responses and has access to a large amount of information, it does not possess human-level common sense — and the model also lacks the background knowledge we have. This means that ChatGPT may sometimes provide nonsensical or inaccurate responses to certain questions or situations.

This means that you, the human, must practise discernment in assessing any answer you get from your chatbot. Does it make sense? Is it accurate?

At the moment, ChatGPT invents information with great facility when it doesn’t know an answer. In one case, it produced a list of 5 books that supported the topic under consideration. When the human fact-checked the list, not even one of those books existed.

The industry refers to this as hallucination. The current rate of hallucination is said to be 15–20%, although it’s anticipated that this situation will improve over time. (Perhaps someone will teach ChatGPT about the human value known as honesty?)

Keep your wits about you…

While researching AI, I came across a compelling quote that reminds us not to blindly accept answers. These days, questioning answers seems crucial, since a chatbot can provide an authoritative-sounding answer in 3 seconds. (No kidding, to each of my questions, the response started within 3 seconds and printed rapidly, without a pause, to completion.)

Implications for safety…

Yuval Noah Harari is an Israeli public intellectual, historian, and professor in the Department of History at the Hebrew University of Jerusalem. His writings examine free will, consciousness, intelligence, happiness, and suffering. Here is a video of his interview, Safe and Responsible AI?

Key points from the interview…

AI has developed much faster than expected by the experts. Three things everyone should know about AI:

  1. It is the first tool in human history that can make decisions by itself. That is nothing like any previous invention.
  2. It’s the first tool in human history that can create new ideas itself. Previous tools could only disseminate our ideas.
  3. Humans are not very good at using new tools. We often make mistakes. It takes time for us to learn to use them in a beneficial and wise way. We know this by looking at examples of what happened during and after the Industrial Revolution. The crucial thing is that while we are learning to use AI, it is learning to use us. So we have less time and margin for error than with any previous invention.

The rest of the interview discusses implications and regulation of AI as it spreads in use. Good to be aware of.

My second ChatGPT experience…

I started another line of inquiry with my friendly chatbot, which had kindly concluded our first session with “If you have any more questions in the future, feel free to ask. Have a great day!”

My question was…

  • How would you interpret this quote from Yuval Noah Harari: Questions you cannot answer are usually far better for you than answers you cannot question.

As I read the response, I noticed that Harari was referred to in the answer. I wondered what the reply would be if I did not include the source of the quote. So I removed his name and asked the revised question.

Comparing the two replies, I saw that the overall organization was identical. Concepts in each paragraph were the same, with slight variation in arrangement of words. Click the icon below if you’d like to see what ChatGPT said about the quote… and maybe reflect on whether or not that would be your interpretation.

And one more chat session…

Having recently written about discernment, I asked ChatGPT to write a song on the topic. The result came back in about 10 seconds, with 3 verses, a chorus, a bridge, and an outro. When I asked for a shorter song that did not lose the essence, the request was met.

The lyrics consist of appropriate buzzwords about discernment, strung together with some sense of cadence and rhyme. Hokey is the descriptor that immediately came to mind, in terms of both structure and content. This is a demonstration of another of the weaknesses identified in the previously mentioned Forbes article.

Lack of emotional intelligence: While ChatGPT can generate responses that seem empathetic, it does not possess true emotional intelligence. It cannot detect subtle emotional cues or respond appropriately to complex emotional situations.

When I asked for the music, I was advised that ChatGPT does not do audio. This was followed by loads of advice about musical style, genre, and arrangement.

Below is the full chat. The last phrase of the first verse doesn’t make sense to me; it doesn’t fit the context—an example of AI’s lack of common sense.

Good resources are important when navigating…

When we’re heading into new territory, as I was here, good resources pave the way. I did a lot of reading before opening a chat account. Here are two resources that I found particularly helpful…

And now I’m wondering…

What struck you about any or all of this?

If you found this of value, please share it with someone.