UPDATE May 17, 2023: Proper funding for a new federal AI agency is needed to match the tech industry’s speed and power. The name of the prospective agency and a map of its possible functions are yet to be determined. But does having one agency regulate all AI make sense, versus adding AI oversight to the existing federal regulatory framework?
• Wired > “Spooked by ChatGPT, US Lawmakers Want to Create an AI Regulator” by Khari Johnson (May 17, 2023) – At a Senate Judiciary subcommittee hearing, senators from both parties and OpenAI CEO Sam Altman said a new federal agency was needed to protect people from AI gone bad.
The lords of AI … an International Agency for AI?
• Wired > email Newsletter > Steven Levy > Plaintext > The Plain View > “Gary Marcus used to call AI stupid – now he calls it dangerous” (May 5, 2023) – There’s a difference between power and intelligence.
Marcus [1], always loquacious, has an answer: “Yes, I’ve said for years that [LLMs] are actually pretty dumb, and I still believe that. But there’s a difference between power and intelligence. And we are suddenly giving them a lot of power.”
Marcus has an idea for who might do the enforcing. He has lately been insistent that the world needs, immediately, “a global, neutral, nonprofit International Agency for AI,” …
The success of large language models like OpenAI’s ChatGPT, Google’s Bard, and a host of others has been so spectacular that it’s literally scary. This week President Biden summoned the lords of AI to figure out what to do about it. Even some of the people building models, like OpenAI CEO Sam Altman, recommend some form of regulation. And the discussion is a global one; Italy even banned OpenAI’s bot for a while.
Toward a regulatory framework for AI …
• The Technology 202 > “Biden’s enforcers see antitrust threats in AI rush” by Cristiano Lima with research by David DiMolfetta (May 9, 2023) – Will a small group of large tech companies – with power and resources that rival nation-states – corner the AI market?
Key officials including Justice Department antitrust chief Jonathan Kanter and Federal Trade Commission Chair Lina Khan have issued several warnings against potential anti-competitive abuses by companies as they look to grow their AI businesses.
Khan issued a more pointed warning last week, writing in an op-ed in the New York Times that, “The expanding adoption of A.I. risks further locking in the market dominance of large incumbent technology firms.”
Khan recently touted the agency’s creation of an Office of Technology as crucial to its AI work.
Related articles
• CNET > “Google’s Bard Chatbot Opens to the Public” by Stephen Shankland [2] (May 10, 2023) – Google is trying to balance AI progress with caution.
Google is ready to open the Bard floodgates, at least to English speakers around the world. After two months of testing, access to the AI-powered chatbot is no longer gated by a waitlist.
• Wired > “How ChatGPT and Other LLMs Work – and Where They Could Go Next” by David Nield (Apr 30, 2023) – Large language models like AI chatbots seem to be everywhere. If you understand them better, you can use them better.
Notes
[1] Gary Marcus: one of the “go-to talking heads on this breakout topic” … “53-year-old entrepreneur and NYU professor emeritus who now lives in Vancouver” … TED talk on constraining AI… Substack “The Road to A.I. We Can Trust” … podcast Humans vs. Machines. For his 23 years at NYU, he was in psychology, not computer science. … cofounded an AI company called Geometric Intelligence (sold to Uber in 2016) … cofounded a robotics firm, Robust AI, which he left in 2021.
History: deep learning neural networks vs. old-school AI, based on reasoning and logic … with some references to Geoffrey Hinton, known as the godfather of deep learning, …
[2] Stephen Shankland has been a reporter at CNET since 1998 … covering the technology industry for 24 years and was a science writer for five years before that.
A chatbot with rules that choose the response for the greater good … positronically …
• Wired > “A Radical Plan to Make AI Good, Not Evil” by Will Knight (May 9, 2023) – OpenAI competitor Anthropic says its Claude chatbot has a built-in “constitution” that can instill ethical principles and keep systems from going rogue.
Related
• Wired > “How To Delete Your Data From ChatGPT” by Matt Burgess (May 9, 2023) – OpenAI has new tools that give you more control over your information—although they may not go far enough.
• Wired > “What Really Made Geoffrey Hinton Into an AI Doomer” by Will Knight (May 8, 2023) – The AI pioneer is alarmed by how clever the technology he helped create has become. And it all started with a joke.
In his latest article, Steven Levy draws a parallel between his assessment of the first iPhone in 2007 (before the rise of the App Store) and the current state of chatbots. A failure of foresight. And “prompt-and-pronounce” stunts.
• Wired > email Newsletter > Steven Levy > Plaintext > The Plain View > “You’re probably underestimating AI chatbots” (May 12, 2023) – We risk failing to anticipate the potential trajectories of our AI-infused future.
This article discusses a forecast for the industrial cost of AI services – a massive increase, despite ongoing improvements in hardware performance [1] – “As demand for GenAI continues exponentially.”
Will future personal computers and smartphones carry some of the load?
• Forbes > “Generative AI Breaks The Data Center: Data Center Infrastructure And Operating Costs Projected To Increase To Over $76 Billion By 2028” by Jim McGregor, Contributor; Tirias Research, Contributor Group
Notes
[1] Cf. Hot Chips semiconductor technology conference
How to “mitigate the dark side of AI”?
Reference: Office of Science and Technology Policy > Blueprint for an AI Bill of Rights – Making Automated Systems Work For The American People
• Wired > email Newsletter > Steven Levy > Plaintext > The Plain View > “Everyone wants to regulate AI. No one can agree how” (May 26, 2023) – We blew it when it came to regulating social media, so let’s not mess up with AI.
A vision for how to regulate AI …
• The Washington Post > “The Technology 202” by Cristiano Lima (May 30, 2023)
Legislation and guardrails … scaffolding for AI regulation.
• Washington Post > “Europe moves ahead on AI regulation, challenging tech giants’ power” by Cat Zakrzewski and Cristiano Lima (June 14, 2023) – European lawmakers voted to approve the E.U. AI Act, putting Brussels a step closer to shaping global standards for artificial intelligence.
Key points
What’s happening in the US Congress on AI legislation?
The Lieu Review 6-23-2023
• YouTube > Rep. Ted Lieu > “Rep Lieu Discusses Need for Federal Regulation of Artificial Intelligence on Msnbc’s Morning Joe” (June 20, 2023) [1]
Notes
[1] Transcript
US Congressman Rep. Ted Lieu discusses the need for federal regulation of artificial intelligence on MSNBC’s Morning Joe (6-20-2023)
Welcome back to “Morning Joe.”
President Biden will be in San Francisco later today to meet with artificial intelligence experts to learn more about the growing technology.
This meeting comes as Politico reports dozens of Democratic strategists gathered recently to discuss the coming election.
However, their focus wasn’t on President Biden or Donald Trump but, rather, how to combat disinformation spread by artificial intelligence in 2024.
Currently, there are no restrictions on using A.I. in political ads, and campaigns are not required to disclose when they use the technology. That has led some strategists to sound the alarm about the unregulated new innovation.
Let’s bring in Democratic Congressman Ted Lieu of California. He’s been calling for regulations over A.I., and he is proposing a bill. We want to get to that in a moment.
Congressman, explain the danger, just in political campaigns, of the use of unregulated A.I.
>> Thank you for your question. As a recovering computer science major, I’m fascinated with A.I. and all the good things it is going to do for society. It can also cause harm, and I think that’s why it is important that we have regulations and laws that allow A.I. to innovate but prevent avoidable harms and put in guardrails.
We also have to be humble and understand there’s a lot we don’t know.
As members of Congress, we have to acknowledge that we have to have experts sometimes advise us on new technologies. That’s why later this morning, I’m creating an A.I. commission – that’s a bipartisan bill, and it’ll be carried on the Senate side as well. It’ll look at what A.I. we might want to regulate and how we might want to go about doing so, including A.I. used in political campaigns.
>> Congressman, speak to us about the challenges of trying to regulate something that is developing so rapidly. A.I. is expanding seemingly by the day. Technology improves by the day. How hard is it going to be to wrap your arms around something that is evolving so quickly?
>> That is a great question. I don’t know that we’d even know what we were regulating because it is moving so quickly. Look at the applications that came out since ChatGPT debuted – it is hundreds and probably thousands by now. Some of these harms may, in fact, happen, but maybe they don’t. Maybe we see some new harm. I think it is good to have some time pass.
It is good to have a commission of experts advise us.
If we in Congress make a mistake in writing legislation, you need another act of Congress to correct it.
>> For Americans who really know nothing about this, can you talk a little bit about your greatest areas of concern? Maybe some examples of ways this technology could run amok, could cause problems? What was it that you heard that made you say, we need to look at this more closely?
>> Sure. As a legislator, I view this as two bodies of water: a big ocean of A.I. and a small lake. In the big ocean, there’s all the A.I. we don’t care about – A.I. in a smart toaster that has a preference for English muffins over wheat toast, we don’t care about that.
In the small lake, there’s the A.I. we do care about. You ask, why would we want to care about that?
First, there’s technology that might cause harm to society, such as facial recognition, which is amazing technology but is biased against people with darker skin. If you deploy that nationwide with law enforcement agencies, you’ll have violations because minorities will be misidentified at higher rates.
I introduced legislation for guardrails on that. That’s an example of harm A.I. can cause.
>> Thank you. >> Congressman, Claire McCaskill here. I’m concerned about political campaigns. As you well know, the most powerful weapon in a political campaign is video of the candidate speaking in their own words.
Many people don’t do town halls during Congress because they’re afraid their tracker will get them on film in a moment when they say something awkward or misspeak, and it can be used against them later.
I have a sense of urgency about what is going to happen in the next cycle, when people start airing commercials of candidates speaking words they never said. What would your legislation do about that?
Is there any urgency to move, at least on whether you have to disclose A.I. being used in advertising?
>> Thank you for your question. Nothing in the bill precludes Congress from acting in discrete areas of A.I. regulation.
I also note that there is A.I. that can counter bad A.I. For example, you have some companies working on A.I. that can authenticate videos and original images, so that could be something that campaigns can use.
In addition, I support legislation that requires disclosure on ads and social media and so on. Next time, for example, if you see a pro-Trump ad, it might say at the bottom, “paid for by the Kremlin.” That’s a disclosure we’d like to see.
>> Okay, yeah. That would be good.
Will pledges by the most influential AI tech companies – to mitigate the risks of emerging AI platforms – lead to industry standards? What’s their history of keeping safety and security commitments? What could go wrong, eh?
• Washington Post > “Top tech firms sign White House pledge to identify AI-generated images” by Cat Zakrzewski (July 21, 2023) – Google and ChatGPT-maker OpenAI agreed to the voluntary safety commitments, e.g., watermarking.
The tenor of the AI regulation debate in Congress: the elephant in the room – Cold War missile gap redux.
• The Washington Post > The Technology 202 (email newsletter) > “China is the elephant in the room in the AI debate” by Cristiano Lima (July 27, 2023)
The original Gilded Age (a term coined by Mark Twain for the period ~1877–1900): steam power, electric power, telephone & telegraph, and the tempo of life. Then as now: market concentration, wealth disparity, fake news (yellow journalism), … But now at a scale …
• The Washington Post > The Technology 202 (email newsletter) > “We’re living in a ‘Digital Gilded Age,’ former FCC chair says” by Cristiano Lima (Oct 12, 2023) – The original Gilded Age was driven by a technological revolution.
So, a balance between AI supremacy and harms.
• House.gov > The Lieu Review, Weekly Updates from Congressman Ted Lieu (October 11, 2024)