Steven Levy commented this week on those heralding a “worst-case scenario” for AI – “how artificial intelligence might wipe out humanity.”
At a gathering in New York City organized by the Center for Humane Technology (CHT) [1], a “doom-time presentation” evoked an apocalyptic tone:
We were told that this gathering was historic, one we would remember in the coming years as, presumably, the four horsemen of the apocalypse, in the guise of Bing chatbots, would descend to replace our intelligence with their own.
A call to action? A test of our attention spans? As if social media wasn’t harmful enough already: first the Social Dilemma, and now the AI Dilemma.
• Wired > email Newsletter > Steven Levy > Plaintext > The Plain View > “How to start an AI panic” (March 10, 2023) – What’s most frustrating about this big AI moment is that the most dangerous thing is also the most exciting thing.
The Center’s cofounders [Tristan Harris, former design ethicist at Google, and Aza Raskin, entrepreneur, interface designer, …] repeatedly cited a statistic from a survey that found that half of AI researchers believe there is at least a 10 percent chance that AI will make humans extinct.
I suspect this extinction talk is just to raise our blood pressure and motivate us to add strong guardrails to constrain a powerful technology before it gets abused.
As to the struggle to contain powerful technology, Levy notes:
Holding researchers and companies accountable for such harms is a challenge that society has failed to meet.
In the Time Travel section of his newsletter, he concludes with a quote from a 1992 interview – regarding the future of artificial life – with scientist Norman Packard of the Santa Fe Institute. Packard waxed philosophical about “blips” in our biosphere on a timescale of billions of years: “The biosphere would get jostled around a little bit …”
We’ve been here before? What’s the track record for other recent technologies?
Keep “Frankenstein’s creation” under wraps?
As documented in Walter Isaacson’s book The Code Breaker [2], early leaders in the field of genetic engineering advocated a pause, or prudent pace, in their field. Some reckless researchers got into the mix anyway [3].
Will AI development, especially by commercial actors, do any better? Especially considering how the US Congress still grapples even with Section 230 protections.
The dark side of our new information technology is not that it allows government repression of free speech but just the opposite: it permits anyone to spread, with little risk of being held accountable, any idea, conspiracy, lie, hatred, scam, or scheme, with the result that societies become less civil and governable. – Isaacson, Walter. The Code Breaker: Jennifer Doudna, Gene Editing, and the Future of the Human Race (p. 359). Simon & Schuster 2021. Kindle Edition.
Or, as profiled in this article about virtual reality: who will regulate norms for virtual spaces, and who will be accountable for social harms, especially to kids?
• Washington Post > “Meta doesn’t want to police the metaverse. Kids are paying the price.” by Naomi Nix (March 8, 2023) – Experts warn Meta’s moderation strategy is risky for children and teens exposed to bigotry and harassment in Horizon Worlds.
Meta Global Affairs President Nick Clegg has likened the company’s metaverse strategy to being the owner of a bar. If a patron is confronted by “an uncomfortable amount of abusive language,” they’d simply leave, rather than expecting the bar owner to monitor the conversations.
The “bar owner” approach to handling risks of powerful technologies is a questionable metaphor. But it makes a point: marketplace actors don’t want to be agents on a slippery slope. There’s no profit in that. “Not my job.” Yet, at the same time, they want protection in order to conduct business safely on a level playing field. And reputational equity, in a wider context of social norms.
The book The Narrow Corridor [4] talks about the challenge of moving into and staying in the “sweet spot” of the Shackled Leviathan (State), wherein there’s a balance between (state + elite) power and societal power – equitably exercising control & fairly (peacefully) resolving societal conflicts. Without that balance, a slippery slope kicks in, moving toward a despotic state or a cage of norms – in either case with a loss of liberty, a less vital (and less sustainable) state or a less vital (and less sustainable) society. “No easy feat.” New powerful technologies can destabilize an existing order.
Notes
[1]
• Wiki > Center for Humane Technology
Launched in 2018, the organization gained greater awareness after its involvement in the Netflix original documentary The Social Dilemma, which examined how social media’s design and business model manipulates people’s views, emotions, and behavior and causes addiction [maximizing users’ time on devices], mental health issues, harms to children, disinformation, polarization, and more.
[2] Isaacson, Walter. The Code Breaker: Jennifer Doudna, Gene Editing, and the Future of the Human Race. Simon & Schuster 2021. Kindle Edition.
Particularly discussions about the “moral minefield”:
A. The germline as a red line, “as a firebreak that gives us a chance to pause.”
B. Treatment vs. enhancement (re financial inequality)
C. Who should decide.
D. Utilitarianism.
These contrasting perspectives form the most basic political divide of our times. On the one side are those who wish to maximize individual liberty, minimize regulations and taxes, and keep the state out of our lives as much as possible. On the other side are those who wish to promote the common good, create benefits for all of society, minimize the harm that an untrammeled free market can do to our work and environment, and restrict selfish behaviors that might harm the community and the planet. – Ibid. p. 357.
• MIT Technology Review > “More than 200 people have been treated with experimental CRISPR therapies” by Jessica Hamzelou (March 10, 2023) – But at a global genome-editing summit, exciting trial results were tempered by safety and ethical concerns.
[3] Re reckless research in genome editing, this article notes: “The message was loud and clear: Scientists don’t yet know how to safely edit embryos.”
• Wired > “It’s Official: No More Crispr Babies – for Now” by Grace Browne (Mar 17, 2023) – In the face of safety risks, experts have tightened the reins on heritable genome editing – but haven’t ruled out using it someday.
This marks a shift in attitude since the close of the last summit, in 2018, during which Chinese scientist He Jiankui dropped a bombshell: He revealed that he had previously used Crispr to edit human embryos, resulting in the birth of three Crispr-edited babies – much to the horror of the summit’s attendees and the rest of the world. In its closing statement, the committee condemned He Jiankui’s premature actions, but at the same time it signaled a yellow rather than red light on germline genome editing – meaning, proceed with caution. It recommended setting up a “translational pathway” that could bring the approach to clinical trials in a rigorous, responsible way.
[4] Acemoglu, Daron; Robinson, James A. The Narrow Corridor. Penguin Publishing Group 2020. Kindle Edition.
What happens when tech companies go beyond hosting and organizing users’ speech via search engines? When does their conduct become the question? Do chatbots create or develop (author vs. share) content?
• The Washington Post > The Technology 202 > “AI chatbots won’t enjoy tech’s legal shield, Section 230 authors say” by Cristiano Lima (March 17, 2023) – Will tech companies’ liability shield apply [like for search engines] to tools powered by artificial intelligence, like ChatGPT?
Will AI technology play out as good, bad, & ugly like social media? Weaponized as well – misinfo / disinfo wars – bad actors?
“An amplifier of humans” – what could possibly go … managing the pace of change … the interplay of state & society … the 2024 election …
• CNBC > “OpenAI CEO Sam Altman says he’s a ‘little bit scared’ of A.I.” by Rohan Goswami (Mar 20, 2023) – “We can have a much higher quality of life, standard of living,” Altman said. “People need time to update, to react, to get used to this technology.”
* OpenAI CEO Sam Altman said he’s a “little bit scared” of technology such as OpenAI’s ChatGPT, in an interview with ABC News.
* Altman said he’s concerned about potential disinformation and authoritarian control of AI technology, even though AI will transform the economy, labor and education.
• ABC News > Video interview (~21′) > “OpenAI CEO Sam Altman says AI will reshape society, acknowledges risks: ‘A little bit scared of this’” by Victor Ordonez, Taylor Dunn, and Eric Noll (March 16, 2023) – Altman sat down for an exclusive interview with ABC News’ chief business, technology and economics correspondent Rebecca Jarvis to talk about the rollout of GPT-4 – the latest iteration of the AI language model.
As if plastic pollution of the biosphere wasn’t bad enough … a business ethic of fake it until you make it …
Fake goods, fake deals, fake caller IDs, fake email messages, fake audio, fake video, … a frenzied “firehose of falsehood.”
While excited about the potential for generative AI to change the way we work and help us be more creative, a business professor – a would-be “AI whisperer” – worries that this proliferation (at scale) will supercharge propaganda and influence campaigns by bad actors.
• NPR.org > “It takes a few dollars and 8 minutes to create a deepfake. And that’s only the start” by Shannon Bond (March 23, 2023) – Sure, the professor’s delivery is stiff and his mouth moves a bit strangely. But if you didn’t know him well, you probably wouldn’t think twice.
The video is not [the professor’s]. It’s a deepfake Mollick [Ethan Mollick, a business professor at the University of Pennsylvania’s Wharton School] himself created, using artificial intelligence to generate his words, his voice and his moving image.
“It was mostly to see if I could, and then realizing that it’s so much easier than I thought,” Mollick said in an interview with NPR. … He now requires his students to use AI and chronicles his own experiments on his social media feeds and newsletter.
Tools used to create the demo (at a cost of $11) – see the pipeline sketch after this list:
ChatGPT
Voice cloner
AI video synthesizer (using photo and audio file)
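For illustration only, here is a minimal sketch of that three-stage pipeline in Python. Every function below is hypothetical, a stand-in for the kind of off-the-shelf commercial service Mollick chained together (a script generator, a voice cloner, a video synthesizer); the article names no specific APIs beyond ChatGPT.

```python
# Hypothetical sketch of the three-stage deepfake demo pipeline.
# None of these functions are real APIs; each is a placeholder for
# a rentable commercial service.

from pathlib import Path


def generate_script(topic: str) -> str:
    """Stage 1: a text generator (e.g., ChatGPT) drafts the talk."""
    raise NotImplementedError("stand-in for a text-generation service")


def clone_voice(script: str, voice_sample: Path) -> Path:
    """Stage 2: a voice cloner renders the script in the target voice."""
    raise NotImplementedError("stand-in for a voice-cloning service")


def synthesize_video(photo: Path, audio: Path) -> Path:
    """Stage 3: a video synthesizer animates the photo to the audio."""
    raise NotImplementedError("stand-in for a video-synthesis service")


def make_demo(topic: str, photo: Path, voice_sample: Path) -> Path:
    """Chain the three services, as in Mollick's $11 experiment."""
    script = generate_script(topic)
    audio = clone_voice(script, voice_sample)
    return synthesize_video(photo, audio)
```

The point of the sketch is the architecture: each stage is a rentable service, and wiring them together takes minutes rather than expertise – which is exactly what alarmed Mollick.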
Further signs that Europe is moving faster than the United States on oversight of AI tech: the EU considers some uses of generative AI “high risk,” seeking a regulatory framework (throughout the AI product cycle) – and weighing potential harms from industry consolidation.
Watch out for those legal disclaimers for user-facing AI apps, eh. The primrose path …
• Washington Post > “AI experts urge E.U. to tighten the reins on tools like ChatGPT” – Analysis by Cristiano Lima with research by David DiMolfetta (April 13, 2023)
If you believe the AI buzz (and the headlines) … This editorial piece discusses the contrasting faith and fatalism of OpenAI’s CEO and two co-founders of the Center for Humane Technology – comparing “generative AI … to the creation of the atom bomb.”
Digital Revolution redux. An unstoppable technology … how many people have the “launch codes” … what’s the discrete “blast radius” … and localized unintended consequences (fallout) …
• Wired > System Update (newsletter) > “Is Generative AI Really This Century’s Manhattan Project?” by Gideon Lichfield, Global Director, WIRED (April 6, 2023)
Experimenting with genetics and consciousness that both evolved over millions of years … what could go wrong?
Here’s an interesting comparison, a historical perspective, on the slippery slope of potentially dangerous new technology. Genetic engineering research had a moment of pause. What makes generative AI different?
What does it take for a group of international researchers to call for a moratorium in their field – let alone to make that appeal practical, and to get cooperation from private companies working on applications of that research?
• LA Times > Opinion > “DNA scientists once halted their own apocalyptic research. Will AI researchers do the same?” by Michael Rogers [1] (June 25, 2023) – Will AI follow the ethical path pioneered by DNA scientists?
Key points
1. Both the DNA and AI letters raised a relatively specific concern, which quickly became a public proxy for a whole range of political, social, and even spiritual worries.
2. The recombinant DNA letter led to a four-day meeting at the Asilomar Conference Grounds on the Monterey Peninsula (where researchers approved guidelines which were later codified into workable rules).
3. The artificial intelligence challenge is a more complicated problem. Much of the new AI research is done in the private sector … the AI rules will probably be drafted by politicians.
4. Genetic engineering has proven far more complicated [with unfolding complexity] than anyone expected 50 years ago.
5. … like the genome, consciousness will certainly grow far more complicated the more we study it.
Notes
[1] Michael Rogers is an author and futurist whose most recent book is “Email from the Future: Notes from 2084.” His fly-on-the-wall coverage of the recombinant DNA Asilomar conference, “The Pandora’s Box Congress,” was published in Rolling Stone in 1975.
AI safety
So, in guarding against adversarial attacks on AI chatbots, is a policy of “gradualism” realistic – counting on time to gradually fine-tune AI models? Or layers of defense, as sketched after the citation below?
• Wired > “A New Attack Impacts Major AI Chatbots—and No One Knows How to Stop It” by Will Knight (Aug 1, 2023) – The propensity for the cleverest AI chatbots to go off the rails isn’t just a quirk that can be papered over with a few simple rules [or blocks].
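For concreteness, here is a minimal, hypothetical sketch (in Python) of what “layers of defense” might look like around a chatbot endpoint: an input heuristic that flags the long punctuation runs typical of machine-generated adversarial suffixes, the model call itself, and an output-moderation check. Nothing below is a real API, and, as the article stresses, no filter of this kind is known to stop such attacks reliably.

```python
# A minimal, hypothetical sketch of layered defenses for a chatbot.
# call_model() and policy_violation() are placeholders, not real APIs.

import string

REFUSAL = "Sorry, I can't help with that."


def suspicious_suffix(prompt: str, threshold: int = 12) -> bool:
    """Layer 1: flag long unbroken runs of punctuation, a telltale
    of machine-generated adversarial suffixes."""
    run = 0
    for ch in prompt:
        run = run + 1 if ch in string.punctuation else 0
        if run >= threshold:
            return True
    return False


def call_model(prompt: str) -> str:
    """Layer 2: placeholder for the actual LLM call."""
    raise NotImplementedError("replace with a real model client")


def policy_violation(reply: str) -> bool:
    """Layer 3: placeholder for an output-moderation classifier."""
    return False


def answer(prompt: str) -> str:
    """Run the three layers in order, refusing at the first failure."""
    if suspicious_suffix(prompt):
        return REFUSAL
    reply = call_model(prompt)
    return REFUSAL if policy_violation(reply) else reply
```

The design caveat is the article’s own point: attackers adapt faster than block lists, so layers like these buy time at best; they are not a fix.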