[Update: February 8, 2023 – see comments as well]
Following the company’s investment in OpenAI, Microsoft has released new OpenAI-infused versions of Bing and Edge.
• LA Times > “Microsoft unveils Bing and Edge with OpenAI technology” by Dina Bass (Feb 8, 2023) – Tech giant upgraded its search engine and browser in hopes of gaining ground on the Google juggernaut.
“This technology is going to reshape pretty much every software category,” Microsoft Chief Executive Satya Nadella said at an event Tuesday at the company’s Redmond, Wash., headquarters. It’s “high time” innovation was restored to internet search, he said.
[Original post January 30, 2023]
Much media buzz about ChatGPT since December 2022. Lots of $$$ already in play.
In last week’s newsletter, my US Congressman, Rep. Ted Lieu, referenced an op-ed in which he discusses the pros and cons of AI, as well as a press release about using ChatGPT to draft a resolution.
• House.gov > Media Center > Editorials > “New York Times Op-Ed: I’m a Congressman Who Codes. A.I. Freaks Me Out.” (January 23, 2023) – I am freaked out by A.I., specifically A.I. that is left unchecked and unregulated.
(quote) Imagine a world where autonomous weapons roam the streets, decisions about your life are made by AI systems that perpetuate societal biases and hackers use AI to launch devastating cyberattacks. This dystopian future may sound like science fiction, but the truth is that without proper regulations for the development and deployment of Artificial Intelligence (AI), it could become a reality. The rapid advancements in AI technology have made it clear that the time to act is now to ensure that AI is used in ways that are safe, ethical and beneficial for society. Failure to do so could lead to a future where the risks of AI far outweigh its benefits.
I didn’t write the above paragraph. It was generated in a few seconds by an A.I. program called ChatGPT, which is available on the internet. I simply logged into the program and entered the following prompt: “Write an attention grabbing first paragraph of an Op-Ed on why artificial intelligence should be regulated.”
… I will be introducing legislation to create a nonpartisan A.I. Commission to provide recommendations on how to structure a federal agency to regulate A.I., what types of A.I. should be regulated and what standards should apply.
• House.gov > Media Center > Press Releases > “Rep Lieu Introduces First Federal Legislation Ever Written by Artificial Intelligence” (Jan 26, 2023)
(quote) WASHINGTON – Today, Congressman Ted W. Lieu (D-Los Angeles County) introduced the first ever piece of federal legislation written by artificial intelligence. Using the artificial language model ChatGPT, Congressman Lieu offered the following prompt: “You are Congressman Ted Lieu. Write a comprehensive congressional resolution generally expressing support for Congress to focus on AI.” The resulting resolution introduced today is the first in the history of Congress to have been written by AI.
As yesterday’s Wired newsletter noted:
(quote) Bots are nothing new, but ChatGPT is unusually slick with language thanks to a training process that included digesting billions of words scraped from the web and other sources. Its ability to generate short essays, literary parodies, and even functional computer code made it a social media sensation and the tech industry’s newest obsession.
I remember ELIZA, an interactive terminal-based program from the 1970s that used Rogerian-style scripts (person-centered psychotherapy) to chat. I was at a college computer center that showcased the chatty program. It was amazing how much personal information visitors (“outside the ivory tower”) volunteered in such (“human-ish”) conversations.
(quote from podcast article below) [Will Knight] The funny thing is, going back to the very early days of AI, the first chatbots, people were willing to believe that those were human. There’s famously this one that was made at MIT called ELIZA, where it was a fake psychologist and people would tell it their secrets.
(quote from 2nd article below) [Bindu Reddy, CEO of Abacus.AI] Reddy, the AI startup CEO, knows ChatGPT’s limitations but is still excited about the potential. She foresees a time when tools like it are not just useful, but convincing enough to offer some form of companionship. “It could potentially make for a great therapist,” she says.
Do open-ended use of language and somewhat artful conversation evince something’s intelligence? Imagine if one of your dear pets, like a parrot, started talking like ChatGPT, eh. Or online avatars.
FAQ
What is ChatGPT? Chat Generative Pre-Trained Transformer. What’s a generative AI program? A (language) transformer?
What can it do (and not do – shortcomings)? Mimicry without actually understanding how the world works.
What’s the worry? Flaws: nonsense, biases, plagiarism, misinformation, outdated data … fuzzy “guardrails” – data sets and “monsters from the id” scraped from the web.
Is ChatGPT free? … will it stay free?
Here’s a Wired podcast / transcript which discusses this tech – with an intro that was written by ChatGPT.
• Wired > “How These AI-Powered Chatbots Keep Getting Better” by Wired Staff (Dec 8, 2022) – Gadget Lab discusses the advances in generative AI tools like ChatGPT that make computer-enabled conversations seem more human than ever.
(quote) Will Knight [WIRED senior writer] … the thing that’s really important to remember is that they are just slurping up and regurgitating in a statistically clever way stuff that people have made. … So I think we’re just really, really well designed to use language and conversation as a way to imbue intelligence on something.
Lauren Goode: OpenAI is a super interesting company. It claims its mission is to make AI open and accessible and safe. It started as a nonprofit, but now it has a for-profit arm. … Google owns a company called DeepMind that is working on similar large language models.
Will Knight: … I think there’s a really good argument that these tools should be more available and not just in the hands of these big companies.
Here’s another article about ChatGPT’s quirks / shortcomings.
• Wired > “ChatGPT’s Most Charming Trick Is Also Its Biggest Flaw” by Will Knight (Dec 7, 2022) – “Each time a new one of these models comes out, people get drawn in by the hype,” says Emily Bender, a professor of linguistics at the University of Washington.
(quote) ChatGPT, created by startup OpenAI, has become the darling of the internet since its release last week. Early users have enthusiastically posted screenshots of their experiments, marveling at its ability to generate short essays on just about any theme, craft literary parodies, answer complex coding questions, and much more. It has prompted predictions that the service will make conventional search engines and homework assignments obsolete.
Yet the AI at the core of ChatGPT is not, in fact, very new. It is a version of an AI model called GPT-3 that generates text based on patterns it digested from huge quantities of text gathered from the web.
ChatGPT stands out because it can take a naturally phrased question and answer it using a new variant of GPT-3, called GPT-3.5.
… the team fed human-written answers to GPT-3.5 as training data, and then used a form of simulated reward and punishment known as reinforcement learning to push the model to provide better answers to example questions.
But because they mimic human-made images and text in a purely statistical way, rather than actually learning how the world works, such programs are also prone to making up facts and regurgitating hateful statements and biases – problems still present in ChatGPT. Early users of the system have found that the service will happily fabricate convincing-looking nonsense on a given subject.
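To make the “simulated reward and punishment” idea above concrete, here’s a minimal toy sketch in Python of a REINFORCE-style update – emphatically not OpenAI’s actual pipeline. A stand-in reward function plays the role of the human-preference reward model, and a tiny softmax “policy” over three canned answers gets nudged toward rewarded outputs. The answers, reward values, and hyperparameters are all invented for illustration.

```python
# Toy sketch (NOT OpenAI's pipeline) of reinforcement learning from feedback:
# a stand-in reward function plays the human-preference reward model, and a
# softmax "policy" over three canned answers is nudged via REINFORCE.
import math
import random

random.seed(0)

answers = [
    "I don't know.",                    # honest but unhelpful
    "The capital of France is Paris.",  # helpful and correct
    "The capital of France is Lyon.",   # confident but wrong
]

def reward(idx: int) -> float:
    """Stand-in for a reward model trained on human preferences (values invented)."""
    return {0: 0.1, 1: 1.0, 2: -1.0}[idx]

def softmax(ls):
    exps = [math.exp(x) for x in ls]
    z = sum(exps)
    return [e / z for e in exps]

logits = [0.0, 0.0, 0.0]  # policy parameters, initially uniform
lr = 0.5

for _ in range(2000):
    probs = softmax(logits)
    i = random.choices(range(len(answers)), weights=probs)[0]  # generate
    r = reward(i)                                              # evaluate
    # REINFORCE update: follow the gradient of r * log p(i) w.r.t. each logit.
    for j in range(len(logits)):
        logits[j] += lr * r * ((1.0 if j == i else 0.0) - probs[j])

for answer, p in zip(answers, softmax(logits)):
    print(f"{p:.2f}  {answer}")
# The policy ends up heavily favoring the rewarded answer.
```

The real thing trains a neural reward model on human rankings and updates billions of parameters, but the core loop – sample an answer, score it, push the model toward higher-scoring behavior – is the same shape.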
How we got to ChatGPT
This long article chronicles the roadmap to ChatGPT. With lots of diagrams.
• ars technica > “The generative AI revolution has begun – how did we get here?” by Haomiao Huang [1] (Jan 30, 2023) – A new class of incredibly powerful AI models has made recent breakthroughs possible.
(quote) There’s a holy trinity in machine learning: models, data, and compute. Models are algorithms that take inputs and produce outputs. Data refers to the examples the algorithms are trained on. To learn something, there must be enough data with enough richness that the algorithms can produce useful output. Models must be flexible enough to capture the complexity in the data. And finally, there has to be enough computing power to run the algorithms.
The big breakthrough in language models… was discovering an amazing model for translation and then figuring out how to turn (transform) general language tasks into translation problems.
The “generative” part is obvious—the models are designed to spit out new words in response to inputs of words. And “pre-trained” means they’re trained using this fill-in-the-blank method on massive amounts of text.
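As a heavily simplified illustration of that fill-in-the-blank objective, here’s a toy Python sketch that “pre-trains” on a tiny text by counting which word follows which, then predicts the most likely next word. Real GPT-style models use neural networks over billions of tokens; the corpus here is invented.

```python
# Toy "pre-training": count next-word statistics from text, then predict.
from collections import Counter, defaultdict

text = (
    "the cat sat on the mat . the cat sat on the rug . "
    "the cat chased the dog ."
)
words = text.split()

# Train: for each word, count the words observed immediately after it.
following = defaultdict(Counter)
for prev, nxt in zip(words, words[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent continuation seen in training."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else "?"

print(predict_next("the"))  # 'cat' (the most common word after 'the')
print(predict_next("sat"))  # 'on'
```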
[The evolution of computer vision research … to Dall-E] Deep learning started to change all of this. Instead of researchers manually creating and working with image features by hand, the AI models would learn the features themselves – and also how those features combine into objects like faces and cars and animals.
Transformers are general-purpose tools for figuring out the rules in one language and then mapping them to another. So if you can figure out how to represent something in a similar way as to a language, you can train transformer models to translate between them.
OpenAI was able to scrape the Internet to build a massive data set that can be used to translate between the world of images and text.
As long as there’s a way to represent something with a structure that looks a bit like a language [ordered sequences], together with the data sets to train on, transformers can learn the rules and then translate between languages.
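To illustrate that “looks a bit like a language” point, here’s a trivial Python sketch that serializes a tiny image into an ordered token sequence – the kind of representation a sequence model could then be trained on. The palette, marker tokens, and image are all invented for illustration.

```python
# Anything representable as an ordered token sequence can be treated like a
# "language." Example: flatten a 4x4 "image" into tokens, row by row.
PALETTE = {0: "<black>", 1: "<white>", 2: "<red>"}

image = [
    [0, 0, 1, 1],
    [0, 2, 2, 1],
    [0, 2, 2, 1],
    [0, 0, 1, 1],
]

tokens = []
for row in image:
    tokens.extend(PALETTE[px] for px in row)
    tokens.append("<eol>")  # marker token between rows

print(" ".join(tokens))
# <black> <black> <white> <white> <eol> <black> <red> <red> <white> <eol> ...
```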
Another consideration is that these AI models are fundamentally stochastic. … There’s no explicit concept of a right or wrong answer – just how close it is to being correct.
The basic workflow of these models is this: generate, evaluate, iterate.
… many of the capabilities these new models are showing are emergent, so they aren’t necessarily being formally programmed. … to explicitly answer questions … without having to be explicitly designed to answer Q&As.
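Here is a minimal Python sketch of that generate-evaluate-iterate loop, using the excerpt’s “no right or wrong answer, just closeness” framing: a random character mutator stands in for the generative model and a match-count score stands in for the evaluator. The target phrase, alphabet, and iteration count are invented.

```python
# Generate, evaluate, iterate: stochastic proposals scored by "closeness."
import random
import string

random.seed(42)

TARGET = "hello world"
ALPHABET = string.ascii_lowercase + " "

def generate(candidate: str) -> str:
    """Stochastically propose a variant (mutate one character)."""
    i = random.randrange(len(candidate))
    return candidate[:i] + random.choice(ALPHABET) + candidate[i + 1:]

def evaluate(candidate: str) -> int:
    """Score by closeness: how many characters match the target."""
    return sum(a == b for a, b in zip(candidate, TARGET))

best = "".join(random.choice(ALPHABET) for _ in TARGET)
for _ in range(5000):
    proposal = generate(best)                  # generate
    if evaluate(proposal) >= evaluate(best):   # evaluate
        best = proposal                        # iterate: keep the better one
print(best)  # with enough iterations, converges to the target phrase
```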
The future
• Wired > “ChatGPT Has Investors Drooling – but Can It Bring Home the Bacon?” by Will Knight (Jan 13, 2023) – The loquacious bot has Microsoft ready to sink a reported $10 billion into OpenAI. It’s unclear what products can be built on the technology.
• Wired > “How to Stop ChatGPT from Going Off the Rails” by Amit Katwala (Dec 16, 2022) – The viral chatbot wasn’t up to writing a WIRED newsletter. But it’s fluent enough to raise questions about how to keep eloquent AI systems accountable.
• Wired > “ChatGPT Is Coming for Classrooms. Don’t Panic” by Pia Ceres (Jan 26, 2023) – The AI chatbot has stoked fears of an educational apocalypse. Some teachers see it as the reboot education sorely needs.
Notes
[1] Haomiao Huang is an investor at Kleiner Perkins, where he leads early-stage investments in hardtech and enterprise software. Previously, he founded the smart home security startup Kuna, built self-driving cars during his undergraduate years at Caltech and, as part of his Ph.D. research at Stanford, pioneered the aerodynamics and control of multi-rotor UAVs.
Concerns about plagiarism quickly arose after ChatGPT was released. Not just in education.
And there’s a technical challenge as well: as generative AI models increasingly contribute to online content (text, images, etc.), not only will web search engines increasingly index AI-written fabrications, but those AI’s ongoing training data sets themselves will be polluted (as noted in “ChatGPT Has Investors Drooling – but Can It Bring Home the Bacon?” above).
So, spotting (and flagging) AI-written texts (vs. human-written texts) is important.
This CNBC article notes that the accuracy of OpenAI’s new classifier tool is relatively low – it even produces false positives – and that there are limitations on input size (number of characters).
• CNBC > TECH > “ChatGPT maker OpenAI comes up with a way to check if text was written by a human” by Jordan Novet (Jan 31, 2023) – identifying synthetic text is no easy task.
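For flavor, here’s a toy Python heuristic – emphatically not OpenAI’s classifier – illustrating one statistical idea behind such detectors: machine-generated text tends to be unusually predictable under a language model, while human prose is more capricious. The reference corpus, sample texts, and smoothing scheme are all invented.

```python
# Toy detector idea: score how predictable a text is under a bigram model.
# Unusually high average log-probability could be flagged as "model-like."
import math
from collections import Counter, defaultdict

reference = "the quick brown fox jumps over the lazy dog . the dog barks ."
words = reference.split()

bigrams = defaultdict(Counter)
unigrams = Counter(words)
for prev, nxt in zip(words, words[1:]):
    bigrams[prev][nxt] += 1

def avg_logprob(text: str) -> float:
    """Mean log P(next word | previous word), with add-one smoothing."""
    toks = text.split()
    vocab = len(unigrams)
    total = 0.0
    for prev, nxt in zip(toks, toks[1:]):
        total += math.log((bigrams[prev][nxt] + 1) / (unigrams[prev] + vocab))
    return total / max(len(toks) - 1, 1)

predictable = "the quick brown fox jumps over the lazy dog ."
surprising = "fox lazy the barks quick dog over ."
print(avg_logprob(predictable))  # higher (less negative): more "model-like"
print(avg_logprob(surprising))   # lower: more surprising word order
```

Real detectors face the problem the article describes: the signal is weak, short inputs are unreliable, and human writing that happens to be formulaic gets falsely flagged.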
(quote) “Google’s search interface … [is] bloated with ads and marketers trying to game the system.”
So, OpenAI started with a noble concept – but it’s a non-profit no more, eh. Here’s a FAQ.
• Washington Post > “What to know about OpenAI, the company behind ChatGPT” by Pranshu Verma (Feb 6, 2023)
See also:
Meta’s chatbot was released before ChatGPT debuted. It was boring. No giddiness ensued.
In playing catch-up, will Big Tech forgo safety guardrails? (And invite reputational risk or liability if a response is found to be harmful or plagiarized?)
• Washington Post > “Big Tech was moving cautiously on AI. Then came ChatGPT” by Nitasha Tiku, Gerrit De Vynck and Will Oremus (Feb 3, 2023) – Google, Facebook and Microsoft helped build the scaffolding of AI. Smaller companies are taking it to the masses, forcing Big Tech to react.
As with other technologies … with eyes (wide) open – how to instill “keen or complete knowledge, awareness, or expectations” about ChatGPT et al.?
Here’s an interesting article about public education and AI chatbot literacy à la media literacy. A story about middle / high school computer science teachers using ChatGPT to create lesson plans for students to assess. A sort of Turing test for educational tools, eh?
• Cambridge Dictionary > media awareness:
• NY Times > “At This School, Computer Science Class Now Includes Critiquing Chatbots” (Feb 6, 2023) [subscription wall]
If you’ve been following the “giddiness” about ChatGPT, then you knew this was coming … using that AI tech to enhance search engines – with a chat interface. How to avoid releasing an unreality engine, eh.
• Wired > “The Race to Build a ChatGPT-Powered Search Engine” by Will Knight (Feb 6, 2023) – A search bot you converse with could make finding answers easier—if it doesn’t tell fibs. Microsoft, Google, Baidu, and others are working on it.
Google’s “Bard” chatbot will be released this week [1]. Not sure how that’s related to their investment in Anthropic, as discussed in this LA Times article.
• LA Times > “Google invests millions in AI startup rival to ChatGPT” by Davey Alba and Dina Bass (Feb 6, 2023)
Notes
[1] What’s in a name, eh?
• NY Times > “Racing to Catch Up With ChatGPT, Google Plans Release of Its Own Chatbot” by Cade Metz and Nico Grant (Feb. 6, 2023) – Google said on Monday that it would soon release an experimental chatbot called Bard as it races to respond to ChatGPT, …
As predicted, Microsoft released an AI-enhanced version of their search engine.
• Macworld > “Hands-on: Microsoft’s new AI-powered Bing can write essays and plan vacations” by Mark Hachman, Senior Editor (Feb 8, 2023) – the fresh AI experience already works shockingly well more often than not.
And there’s the financial infatuation:
• Seeking Alpha > “Microsoft rises as analysts praise new AI-powered Bing, Edge” (February 8, 2023) [Subscription wall]
• CNBC > “Microsoft CEO Nadella calls A.I.-powered search biggest thing for company since cloud 15 years ago” by Ashley Capoot (Updated Feb 8, 2023)
The current obsession of the market with all things labeled AI reminds me of all the predatory business models still in play based on hype about scalability of an app-based service – build the app of dreams and “they will come,” eh.
• Yahoo Finance > “Investors obsessing over AI is latest symptom of the ‘Amazon disease’” – Morning Brief by Myles Udland, Head of News (February 8, 2023)
Synthetic content: What could possibly go … ? The dark side: “a sophisticated propaganda campaign from a foreign government.” Sans any watermarks or “poisoned, planted content,” eh.
• Wired > “How to Detect AI-Generated Text, According to Researchers” by Reece Rogers (Feb 8, 2023) – there’s an underlying capricious quality to our human style of communication …
Microsoft’s Bing chat and Google’s Bard are much in the chatbot (generative) AI news. And Apple? Here’s a tech recap and possible arc for Apple.
• Macworld > “Is Apple paying any attention to the ChatGPT AI arms race?” by Jason Cross, Senior Editor (Feb 9, 2023) – Does Apple have something amazing up its sleeve – beyond its current Neural Engine?
A chatbot AI $$$ race?
• CNET > “Microsoft’s AI-Powered Bing Challenges Google Search” by Stephen Shankland (Feb 8, 2023) – Microsoft will show ads next to the new AI search results, Mehdi [Yusuf Mehdi, chief consumer marketing officer] said.
Search, ask for more details, write something, get context for a website, ask a broad question, …
• CNET > “5 Things to Try With Microsoft’s New AI-Powered Bing” by Laura Hautala (Feb 10, 2023) – There’s a waiting list for the AI-powered Bing service now, and Microsoft says it’ll be broadly available and free to use in the coming months.
Over-the-top claims for GPT-4?
As Spock often says, “fascinating.”
So, does the tendency for AI Large Language Models to make things up (“hallucinate”) make them sort of like us? [1]
• Wired > “Some Glimpse AGI* in ChatGPT. Others Call It a Mirage” by Will Knight (Apr 18, 2023) – Understanding the potential or risks of AI’s new abilities means having a clear grasp of what those abilities are – and are not.
* AGI == artificial general intelligence [2]
Notes
[1] This question is more relevant after a couple of weeks talking with many Customer Support agents at a major telecom services provider. All of them “hallucinated” – provided incorrect information – time after time, never just saying that they did not know. The typical agent always seemed inexperienced and narrowly trained: generally okay at routine things but not for more complicated actions. Intentionally so – as corporate policy – or via high turnover?
As noted in the article: “If GPT-4 succeeds on some commonsense reasoning tasks for which it was explicitly trained and fails on others for which it wasn’t, it’s hard to draw conclusions based on that.”
[2] Remember the HAL 9000 in the 1968 film 2001: A Space Odyssey? How was HAL trained?
As noted in the article: “We can’t help but see flickers of intelligence in something that uses language so effortlessly. ‘If the pattern of words is meaning-carrying, then humans are designed to interpret them as intentional, and accommodate that,’ Goodman [Noah Goodman, an associate professor of psychology, computer science, and linguistics at Stanford University] says.”
This Wired article reminded me of the classic Twilight Zone Season 1 Episode 7 “The Lonely” (November 13, 1959), in which a prisoner’s solitude on an asteroid is broken when he is given a relational fembot, with which he eventually bonds. The fembot “develops a personality that mirrors” the prisoner’s.
• Wired > “What Isaac Asimov’s Robbie Teaches About AI and How Minds ‘Work’” by Samir Chopra (Jul 30, 2023) – Even as our ancient ancestors granted natural elements (like the sun, the ocean) mental qualities, do most people want to know how AI agents “really work” internally? Do they even care?