Saturday, March 16, 2024

Can AI Reduce the Climate Emergency? Maybe.

Readers of this blog know the main obstacle to reducing the climate emergency is a lack of political will in developed nations. However, recent advances in Artificial Intelligence (AI) suggest AI could help decision-makers at all levels, in all places, reduce greenhouse gases and climate impacts, as long as climate tipping points are not crossed before we get the chance. I expect you to be skeptical, as I was, so let me explain. Imagine a space alien with a billion times more intelligence than any human who ever lived arriving to advise humanity. According to some of the world's best computer experts, this is exactly where AI is taking us. "By 2049 AI will be a billion times more intelligent than humans," wrote Mo Gawdat, author of the 2021 book Scary Smart and former chief business officer for Google X, the group responsible for self-driving cars. 
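For a sense of where a figure like "a billion times by 2049" could come from, here is my own back-of-the-envelope arithmetic (an illustration only, not Gawdat's actual derivation; the assumption of steady doubling is mine):

    import math

    # Rough check on the "billion times more intelligent by 2049" claim.
    # Assumption (mine, for illustration): capability grows by steady doubling.
    target_multiple = 1_000_000_000                 # one billion
    doublings_needed = math.log2(target_multiple)   # about 29.9 doublings
    years_available = 2049 - 2024                   # 25 years from this post
    print(f"Doublings needed: {doublings_needed:.1f}")
    print(f"Implied doubling time: {years_available / doublings_needed:.2f} years")

In other words, the prediction amounts to machine capability doubling roughly once a year, every year, for the next quarter century.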

After listening to AI-themed videos and reading various sources, I think the seven main challenges in using AI to solve climate issues may be: 1) intended or unintended negative effects from bad actors pirating the technology for profit or terrorism; 2) unintended negative effects from those trying to help; 3) a convenient excuse for developed nations to ignore human rights and equity considerations because they can say "blame the AI"; 4) a convenient excuse to avoid cuts to global carbon and methane emissions, already at dangerously high levels, because of too much government and/or corporate faith in AI; 5) inaccurate reporting of AI results to the public due to political or corporate filtering; 6) "The Obscene Energy Demands of A.I.," as noted in Elizabeth Kolbert's March 9, 2024 essay in The New Yorker; and 7) AI being used to protect the ultrarich from billions of desperate humans as tipping points are crossed and the climate emergency worsens. 

For those new to understanding AI, Cleo Abram gave an excellent summary in "What We Get Wrong About AI (feat. former Google CEO [Eric Schmidt])." This 12-minute, 40-second YouTube video, with 749,666 views since Aug 3, 2023, begins with Sundar Pichai, Google CEO and CEO of its parent company, Alphabet, noting that AI for humanity is "more profound than fire or electricity." Abram's video includes a March 19, 2023 cnet.com article by Daniel Van Boom with the headline "ChatGPT Can Pass the Bar Exam. Does That Actually Matter?" The article notes, "In mid-March, artificial intelligence company OpenAI announced that, thanks to a new update, its ChatGPT chatbot is now smart enough to not only pass the bar exam, but score in the top 10%." 

A more detailed explanation of AI is the nearly three-hour YouTube video on Tom Bilyeu's Impact Theory, "MEGATHREAT: Why AI Is So Dangerous & How It Could Destroy Humanity | Mo Gawdat." Posted June 20, 2023, it has 1,145,110 views. I watched the entire video for its climate implications. 

At 36:07 on the timeline, Gawdat says, "That exponential growth is just mind-boggling because the growth on the next chip in your phone is going to be a million times more than the computer that put people on the moon. [ . . . . ] I remember in my Google years when we were working on Sycamore, Google's quantum computer, Sycamore performed an algorithm that would have taken the world's biggest supercomputer 10,000 years to solve, and it took Sycamore [ . . . ] 200 seconds." 
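To put the quoted comparison in perspective, here is a quick calculation of the implied speedup, using only the figures in the quote (10,000 years versus 200 seconds; the arithmetic is mine, not Gawdat's):

    # Implied speedup from the quoted Sycamore comparison.
    SECONDS_PER_YEAR = 365.25 * 24 * 60 * 60        # about 31.6 million seconds
    classical_seconds = 10_000 * SECONDS_PER_YEAR   # about 3.16e11 seconds
    sycamore_seconds = 200
    speedup = classical_seconds / sycamore_seconds
    print(f"Implied speedup: about {speedup:,.0f} times")   # roughly 1.6 billion

That works out to a speedup of roughly 1.6 billion times on that one benchmark task.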

These technology breakthroughs remind me of my October 28, 2017 post noting, "Last night I watched The Imitation Game about British codebreaker Alan Turing deciphering the Nazis' Enigma machine code. The code was considered 'unbreakable' because of huge obstacles including, as the linked Enigma video notes, 'If you had 100,000 people with 100,000 Enigma machines, all testing different settings [ . . .], test a different setting once a second 24 by 7, it would take twice the age of the universe to break the code.' In other words, as multiple sources noted, it would take finding 'one of these 15 billion billion settings.' [par break] However, Turing's team broke it [ . . . . ]."

I also wrote, "The beautiful 'flaw' (feature, not a bug) is conscience. The Internet offers speed. Reducing carbon use is the goal." Could AI generate technical/political/social answers regarding the climate emergency? I don't know. Humans are stubborn, but listening to an intelligence a billion times smarter would be worth a try.

I wrote in my April 12, 2023 post, "instead of the millions of lives Turing saved [in World War II by inventing the theory for the first computer], the number would now be in the billions." 

At 1:07:08 on the timeline, Gawdat continues, "If you look at us today you would think [ . . . ] the biggest idiots on the planet [ . . . ] are destroying the planet not even understanding that they are. Right? You become a little more intelligent and you say, 'I'm destroying the planet but it's not my problem, but I understand that I'm destroying it.' Okay? You get a little more intelligent and you go like 'No, no, no. Hold on. I'm destroying the planet. I should stop doing what I'm doing.' You get even more intelligent then you say, 'I'm destroying the planet. I should do something to reverse it.' [ . . . . ] The eco-challenge that we go through is not needed. [ . . . . ] Getting together just requires a little more intelligence, a little more communication, [ . . . ] a better presentation of the numbers so that every leader around the world suddenly realizes 'Yeah, it doesn't look good for my country in 50 years' time.' The reality of the matter is that as AI goes through that trajectory of more and more and more intelligence, zooms through human stupidity, to [ . . . ] best IQ, beyond humans' intelligence, [AI machines] will by definition have our best interests in mind, have the best interest of the ecosystem in mind. Just like the most intelligent of us don't want us to kill the giraffes, and [ . . . ] the other species that we're killing every day, a more intelligent AI than us will behave like the intelligence of life itself [ . . . . ]"

The entire video is worth seeing for many reasons. While I disagree with Gawdat's idea that just planting more trees will solve our climate issues, I deeply respect most of his other points, his spiritual beliefs in Sufism, and his vision of using AI to find complex answers currently beyond the capacity of human minds. 

Even at a billion times the intelligence of humans by 2049, it is unreasonable to expect AI to have the compassion and justice of the Creator of everything seen and unseen in all directions forever, whose will cannot be undone. AI will be able to perform what seem like miracles, but God is accessible in the present moment to everyone willing to listen, sans expensive technology and supercomputers cooled to just above absolute zero (about -459 degrees Fahrenheit). His data cannot be corrupted, and no virus can destroy it. It survives the death of galaxies. 

Sufi poet Rumi is quoted as saying, "Ecstatic love is an ocean, and the Milky Way is a flake of foam floating on it." I first saw that quote in The Kabir Book by Robert Bly. Rumi's poem "An Empty Garlic," used with permission of translator Coleman Barks, is one of the most-visited posts on this blog.

Gawdat said at 2:45:53 on the timeline, "But I will always ask myself this question: 'if what I'm using is ethical, healthy, and human?' And this is a question that I ask every single individual listening to us. Please do not use unethical AI. Please do not develop unethical AI. Please don't fall in a trap where your AI is going to hurt someone. One of the things I ask of governments is if something is generated by AI, it needs to be marked as AI [ . . . ]"

I respected that in the video Gawdat said he turned away from having a garage with 16 cars to giving away most of what he earns. This reminded me of the 2010 documentary I Am, which I mentioned before, in which the Dalai Lama said the most important meditation of our time is "critical thinking followed by action."
