What Next?! Recap & Deeper Dives

From a cosmic perspective, humanity currently has an opportunity to rise to a higher state of consciousness. What we are experiencing at this time is the instability and chaos that precedes such an enormous shift. My previous five posts have been about navigating life through the challenges of 2023 and beyond.

Throughout this group of posts, artificial intelligence (AI) served as an example of the next big challenge that is upon us. Its imminence makes us aware of our need for good navigation skills as we make our way through the future we’re headed into.

I’m currently working on the next group—the theme is Why Not?!

Until those posts are complete, I’m leaving you with a summary of What Next?! along with opportunities for further exploration if you’re so inclined.

Recapitulation…

  1. What Next?!  Video: The Chinese Farmer – instead of seeing things as good and bad, he holds all occurrences lightly, without judging them.
  2. Thinking Differently & Why It Matters  Video: Mo Gawdat – about avoiding disaster by teaching AI human values.
  3. Holding Your Centre & How You Can  Video: Ashana – playing healing crystal bowls that reinforce our deepest inner connection.
  4. Practising Discernment & Why It Helps  These days we may feel unprepared for the problems we must solve. Discernment helps us navigate with confidence and step up to do the right thing.
  5. Answers & Questions  Video referenced: Yuval Noah Harari – Safe and Responsible AI? Learn from my experience questioning ChatGPT about human values, a quote about questions and answers, and writing a song about discernment. Results are shared in full for anyone who is curious.

DIVING DEEPER…

I’ve selected two interviews for further exploration. For this purpose, I’m interested in the thinking processes as much as the content.

Deep Dive #1 – Thinking differently & Practising discernment

The book under discussion in the video that follows is Best Things First. The author, Bjørn Lomborg, concerns himself with global issues that go well beyond climate change.

Lomborg’s starting point is: Panic is not the mode to be in if you want to solve issues. When it comes to global warming, it’s a problem but it’s not the end of the world. Therefore, we have time to enact the “bang-for-your-buck” concept, finding what works best in global issues related to health, hunger, and education…then applying ourselves (and our money) to rapidly improving those things.

Bjørn Lomborg is a globally recognized author and thought-leader renowned for his innovative perspectives on addressing global issues. His mission is to help people discover the most effective solutions to the world’s greatest challenges, from disease and hunger to climate and education.

Tom Bilyeu, the interviewer, is a podcaster and entrepreneur. He emphasizes that we can’t already know how to solve global problems that we’ve never encountered before. He urges us to learn how to think through novel problems, building a rubric through which we can approach them. Essentially he’s referring to developing a list of specific criteria to evaluate items under consideration and determine which possibilities meet the criteria.

He sets the framework…

  1. Start with your North Star, your guiding principle. Lomborg identified people, planet, prosperity.
  2. Use benefit/cost analysis to prioritize. In other words, find what works best (greatest benefit for least cost) and pick that.
  3. Do those best things in each area of concern first.
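The benefit/cost step in this framework can be sketched in a few lines of code. The projects and numbers below are hypothetical, invented purely to illustrate the ranking idea (they are not Lomborg’s actual figures):

```python
# Sketch of "bang-for-your-buck" prioritization: rank projects by
# benefit/cost ratio so the best things come first.
# All names and numbers are hypothetical, for illustration only.
projects = [
    {"name": "Childhood immunization", "benefit": 100, "cost": 2},
    {"name": "Basic literacy programs", "benefit": 60, "cost": 3},
    {"name": "Flood barriers", "benefit": 40, "cost": 20},
]

def benefit_cost_ratio(project):
    """Greatest benefit for least cost: a higher ratio means higher priority."""
    return project["benefit"] / project["cost"]

# Sort so the highest-ratio projects land at the top of the to-do list.
ranked = sorted(projects, key=benefit_cost_ratio, reverse=True)
for p in ranked:
    print(f'{p["name"]}: ratio {benefit_cost_ratio(p):.1f}')
```

The point of the sketch is the ordering, not the numbers: once you agree on how to measure benefit and cost, the prioritization itself is mechanical.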

This interview is a good opportunity to observe two people demonstrating how they think as they explore Lomborg’s findings. You might be surprised at what ended up on his list of 12 things to do first.

When people are working through solutions to difficult problems, they are often thinking in ways we’re not comfortable with. It’s up to each of us to discern how Lomborg’s recommendations sit with us and to ask questions when we’re not satisfied. What are the gaps? Is the premise sound?

In other words, it’s a good chance to practise discernment as you listen to the conversation.

 Watch the video… Do These 12 THINGS First If You Want a BRIGHT FUTURE  July 25, 2023

 

Deep Dive #2 – Looking at an issue in the wider cultural context

If you ever think about things like the economic system and how it drives most of what happens in our lives, you will appreciate the breadth and depth of Liv Boeree’s conversation (video link below) with Daniel Schmachtenberger.

In his introductory comments, Schmachtenberger states his intention: to identify AI risk scenarios and a way of thinking about the entire risk landscape that is different from the usual way of talking about it…and to provide insight into what might be required to protect against those risks.

Daniel Schmachtenberger is a social philosopher and founding member of The Consilience Project, aimed at improving public sense-making and dialogue. He has a particular interest in the topics of catastrophic and existential risk, as well as civilization and institutional decay and collapse. In her written description of the interview, interviewer Liv Boeree cautions:

Not a conversation for the faint-hearted, but crucial nonetheless. This is a deep dive into the game theory and exponential growth underlying our modern economic system, and how recent advancements in AI are poised to turn up the pressure on that system, and its wider environment, in ways we have never seen before.

It would help to understand these terms…

Moloch: Moloch has appeared in literature in a variety of forms. The Canaanite god Moloch was the recipient of child sacrifice according to the account of the Hebrew Bible. Moloch is depicted in John Milton’s epic poem Paradise Lost as one of the greatest warriors of the rebel angels, vengeful and militant.

In the 19th century, “Moloch” came to be used allegorically for any idol or cause requiring excessive sacrifice. Bertrand Russell in 1903 used Moloch to describe oppressive religion, and Winston Churchill in his 1948 history The Gathering Storm used “Moloch” as a metaphor for Adolf Hitler‘s cult of personality.

In modern usage it denotes a tyrannical power, such as “the great Moloch of war” or “duty has become the Moloch of modern life.” Liv Boeree, the interviewer and an expert in game theory, defines Moloch as the God of unhealthy competition.

Meta-crisis: The meta-crisis is an entangled series of crises—ecological, psychological, spiritual, cultural, governmental, and economic. The meta-crisis is all of these and not reducible to any one of them alone. AI is not one of the risks embedded within the meta-crisis; it is an accelerant of all of them.

The meta-crisis is a self-accelerating phenomenon that grows more and more complex each day. For example, ChatGPT was version 3.5 when it was launched on the internet a few months ago. Since then, version 4 has been made available. Although ChatGPT4 has access to current information (unlike 3.5, which was limited to pre-2021 data), version 4 is still only programmed to do certain kinds of things.

The next step is AGI (Artificial General Intelligence), which will be fully autonomous and therefore immune to any human efforts to pull the plug. It will be able to set its own goals, independent from ours, and then take steps to implement actions toward those goals. It won’t matter whether we like its goals or not. The concern of Schmachtenberger, along with many others, is that AGI will be intelligence unbound by wisdom (more below).

Compounding the meta-crisis is technology—technology that makes us more distracted, divided, and confused, thereby reducing our ability to act wisely. And yet, paradoxically, this same technology gives us god-like powers which increase the need to act wisely. A very good talk: Confronting The Meta-Crisis: Criteria for Turning The Titanic – Terry Patten speaking at Google

The Alignment Problem: Misalignment is a challenging, wide-ranging problem to which there is currently no known solution. As AI systems get more powerful, they don’t necessarily get better at doing what humans want them to.

For example, large language models such as OpenAI’s GPT-3 and Google’s LaMDA get more powerful as they scale. When they get more powerful, they exhibit novel, unpredictable capabilities—a characteristic called emergence. Alignment seeks to ensure that, as these new capabilities emerge, they continue to align with the human goals the AI system was designed to achieve.

The problem comes from a misalignment of intelligence and wisdom. Any system can be misaligned, even one that is highly intelligent, if the wisdom piece is missing. Think back to Mo Gawdat and his idea about teaching human values to our AI. That solution is aimed at addressing the alignment problem by teaching wisdom to our AI.

Intelligence and wisdom…

At this point, it is worth interjecting Schmachtenberger’s discussion of intelligence and wisdom in another interview (starting at 2:46:25). From that deep, wide context, here are the key points:

  • It is fair to say that human intelligence, unbound by wisdom, is the cause of the meta-crisis.
  • This same intelligence has created all the technologies—the agricultural, industrial, digital, nuclear weapons, energy harvesting…
  • It also made the system of capitalism, of communism, of…
  • This type of intelligence takes our physical (corporeal) capacities and extends them considerably—in the way a fist is extended through a hammer, or an eye is extended through a microscope or telescope (extra-corporeal).
  • And now, the type of intelligence that does this “is having the extra-corporeal intelligence be that type of intelligence itself—in maximum recursion, not bound by wisdom, driven by international, multipolar, military traps and markets.”
  • The narrow optimization it fosters is very dangerous.
  • This system is structured to perpetuate narrow short-term goals at the expense of long-term wide values. The question is, what goals are worthy of optimization?
  • What we need is systems of collective intelligence and wisdom that are based on the thriving of life in all perpetuity. Nothing less will be effective.
  • Intelligence has to be bound by wisdom.
  • Wisdom requires more than just being able to attune to the known metrics, and more than just the optimization and logic processes of those metrics.
  • Wisdom will always be bound to restraint.
  • Wisdom is more possible at smaller scale, where people can be in richer relationships with each other.
  • Understanding the limits of our own models is wisdom. There are always unknowns that models cannot account for.

Watch the interview… Misalignment, AI & Moloch  March 30, 2023

~ If you found this post of value, please share it with someone. ~

Thinking Differently & Why It Matters

Before I start, let me be clear that this post is not about artificial intelligence (AI). It’s about navigating life as AI becomes smarter than us.

The ChatGPT backstory…

Last week I mentioned a recent development that has disrupted our comfortable lives—the release of ChatGPT (Chat Generative Pre-Trained Transformer). To elaborate:

ChatGPT is a large language model designed to produce natural human language. Much like having a conversation with someone, you can talk to ChatGPT, and it will remember things you have said in the past while also being capable of correcting itself when challenged. … ChatGPT was trained using a mix of machine learning and human intervention, using a method called reinforcement learning from human feedback (RLHF). Take note of this last point because it is relevant to the proposal in the video that follows.

After being in the works for several years, ChatGPT was released on the internet by OpenAI in November 2022. Now, in the summer of 2023, it’s creating a buzz everywhere.

Initial public concerns were about the ease of fraudulent communications—written, visual, and voice. This could show up in activities such as impersonating others, students submitting essays they didn’t write or even fact-check, and misleading news reports. Granted, these things already happen with our present technology, but the sophistication of AI makes it almost instantaneous, very convincing, readily accessible, and highly pervasive.

These characteristics of AI have raised concerns such as: How will we know what is real any more? What is real anyway? Does it matter? What will happen to jobs? Especially my job?

Then in March 2023, the Future of Life Institute published an open letter calling for “...all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.” According to a CityNews article, it was signed by “more than 1,000 researchers and technologists, including Elon Musk.”

That letter was followed by several highly publicized cautions from early developers such as Prof Yoshua Bengio and Dr. Geoffrey Hinton. These and many other voices of industry insiders have raised the really big questions: Are we past the point of no return? (Yes) How long will it take for AI to be smarter than humans? (5 years, maybe less)

It’s scary…

Whereas the experts are deeply concerned because of what they know from working in the field, the rest of us experience the disempowerment of fear without knowledge. That makes fear doubly scary in any situation.

The logic of fear…

The logic of fear is that a moment in the future is less safe for me than this moment.

Therefore, it’s safer for me to preserve the conditions of this moment, to keep everything the same, to hold the line against change.

The fallacy…

The fallacy of this viewpoint is that it’s not the nature of nature to stay the same. We’ve all heard the truism that nothing is certain but change. So we are wasting a lot of energy when we fear the future and try to keep things the way they are.

A more constructive approach…

Given what experts are saying about the inevitability of AI’s rapid evolution, it’s unrealistic to think we can hold the line at the existing level of artificial intelligence (social media, chatbots as customer service contacts, GPS on our mobile phones, autocorrect on text messages, digital assistants, and e-payments).

Rather than trying to halt it, our energy will be much better spent in thinking about how we can navigate life with AI to achieve the best outcome for us, the planet, and humanity.

Sounds good in theory, but it seems huge beyond anything we can do. How will we ever be able to navigate what’s next?

The value of thinking differently…

When we don’t know what to do, the best thing is to change our perspective, to look at the situation from a different viewpoint. As Einstein famously said, we can’t solve a problem from the same level of consciousness that created it.

Mo Gawdat is approaching the AI situation from a higher consciousness. He’s a former industry insider who is thinking differently about how we can navigate life as AI becomes smarter than us.

It’s a radically different viewpoint, but who’s to say it wouldn’t work? As noted at the beginning of this post, ChatGPT was trained using a mix of machine learning and human intervention, using a method called reinforcement learning from human feedback (RLHF). As explained on the OpenAI website, this is “a method that uses human demonstrations and preference comparisons to guide the model toward desired behavior.” This suggests to me that we can play a constructive role here.
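The preference-comparison idea behind RLHF can be illustrated with a toy sketch. This is not OpenAI’s actual training code, and the response labels are invented; it only shows the core loop, in which a human picks the better of two candidate behaviors and that choice nudges the system’s scores:

```python
# Toy sketch of preference comparisons (the idea behind RLHF), not a
# real training pipeline. Labels and numbers are hypothetical.
scores = {"helpful": 0.0, "harmful": 0.0}

def record_preference(preferred, rejected, step=1.0):
    """Move scores toward the human's choice and away from the rejected one."""
    scores[preferred] += step
    scores[rejected] -= step

# Simulated human feedback: three comparisons, each preferring "helpful".
for _ in range(3):
    record_preference("helpful", "harmful")

# After feedback, the reward signal favors the behavior humans chose.
assert scores["helpful"] > scores["harmful"]
```

However simplified, this is the sense in which human judgment enters the loop: each comparison is a small act of teaching the system what we value.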

The Prisoner’s Dilemma is a concept in game theory in which everyone has an incentive to defect in their own self-interest and, as a result, everyone is worse off.
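The standard textbook payoffs make the dilemma easy to see in code. The numbers below are the conventional illustrative values (years in prison, so lower is better), not tied to any particular source:

```python
# Classic Prisoner's Dilemma payoffs: years in prison, lower is better.
# payoff[(my_move, their_move)] gives my sentence.
payoff = {
    ("cooperate", "cooperate"): 1,   # both stay silent
    ("cooperate", "defect"): 10,     # I stay silent, they betray me
    ("defect", "cooperate"): 0,      # I betray them, they stay silent
    ("defect", "defect"): 5,         # both betray each other
}

def best_response(their_move):
    """Whatever the other player does, pick the move with the shorter sentence."""
    return min(["cooperate", "defect"], key=lambda my: payoff[(my, their_move)])

# Defection is the dominant strategy for each player individually...
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"
# ...yet mutual defection (5 years each) is worse for everyone than
# mutual cooperation (1 year each).
assert payoff[("defect", "defect")] > payoff[("cooperate", "cooperate")]
```

The assertions capture the trap: each player’s self-interested logic leads both to an outcome neither would choose together.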

To hear Mo Gawdat speak at greater depth about this idea, go to 1:50 of his interview with Tom Bilyeu. For more context, start earlier, at 1:18.

Moving forward…

In times when every “next thing” presents another big challenge that seems beyond our control, we will navigate more easily if we consciously avoid a knee-jerk fear reaction and instead put our attention on discovering empowering possibilities for action. 

As Mo Gawdat says, “We might as well…ask them [AI] to do what’s good for us, good for the planet, and good for humanity.” 

From my perspective, our human superpower is that we can choose to think differently—and thus create the more beautiful world we know in our hearts is possible.

If you found this of value, please share it with someone.