Regarding the timeframe for achieving and qualifying Artificial General Intelligence (AGI), recently (December 4, 2024) on CNBC, Andrew Ross Sorkin interviewed Sam Altman, co-founder and C.E.O. of OpenAI, at the New York Times annual DealBook summit at Jazz at Lincoln Center in New York City.
Altman said that quite capable AI agents (able to choreograph complex processes) will become available for businesses in a few years.
I wonder how this might reshape individual merit [1] and trust in the workplace. And when AGI (whatever the scope) arrives, …
• CNBC > “OpenAI’s Sam Altman on launching GPT4” – Sam Altman, OpenAI CEO, discusses the release of ChatGPT (12-4-2024)
• The Verge > “Sam Altman lowers the bar for AGI” by Alex Heath (Dec 4, 2024) – OpenAI’s charter once said that AGI will be able to “automate the great majority of intellectual labor.”
Nearly two years ago, OpenAI said that artificial general intelligence — the thing the company was created to build — could “elevate humanity” and “give everyone incredible new capabilities.”
“My guess is we will hit AGI sooner than most people in the world think and it will matter much less,” he said during an interview with Andrew Ross Sorkin at The New York Times DealBook Summit on Wednesday. “And a lot of the safety concerns that we and others expressed actually don’t come at the AGI moment. AGI can get built, the world mostly goes on in mostly the same way, things grow faster, but then there is a long continuation from what we call AGI to what we call superintelligence.”
We at The Verge have heard OpenAI intends to weave together its large language models and declare that to be AGI.
Notes
[1] The Aristocracy of Talent
… there is one idea that still commands widespread enthusiasm: that an individual’s position in society should depend on his or her combination of ability and effort. Meritocracy, a word invented as recently as 1958 by the British sociologist Michael Young, is the closest thing we have today to a universal ideology. – Wooldridge, Adrian. The Aristocracy of Talent: How Meritocracy Made the Modern World (2021) (p. 1). Skyhorse. Kindle Edition.
Another AI evangelist promotes a new global infrastructure, wherein “data encodes society’s knowledge and culture and common sense … hopes and dreams.”
• Wired > “Jensen Huang Wants to Make AI the New World Infrastructure” by Zeyi Yang (12-3-2024) – “People are starting to realize that AI is like the energy and communications infrastructure – and now there’s going to be a digital intelligence infrastructure.”
Here’s another perspective on the elevation of services via AIs, regarding “jobs that rely on emotional connections,” the value of human attention, and empathy.
Years ago, I noticed some doctors leaving practices based on health insurance and promoting “concierge medicine.” Some of the key talking points were better access, more personalized care, and more time with the doctor. Will the same stratification apply to the use of AI?
This article has some interesting notes regarding being “fully present” for patients:
• Wired > “The Rich Can Afford Personal Care. The Rest Will Have to Make Do With AI” by Allison Pugh (Dec 7, 2024) – From personal trainers to in-person therapy, what happens if only the wealthy have access to human connection.
Here’s a vision of AGI based on progressive levels of cognitive performance: high school to college to Ph.D. In the interview, Murati was deliberate in the use of the word cognitive; so, I wonder how that will align with emotional performance. And the range of competence across human tasks.
Another factor is the interplay of synthetic data [1].
Murati viewed current “market alignment” (the self-interest of competitors) as addressing safety (which includes privacy) concerns. But that also depends on the social structure within which AIs operate.
The longer-term “vector” regarding AGI relies on our “agency” as we collectively move forward.
• Wired > “Mira Murati Quit OpenAI. She’s as Optimistic as Ever About AGI” by Paresh Dave (Dec 3, 2024) – includes video – At WIRED’s The Big Interview event, the ex-OpenAI CTO said she’s still in the midst of setting up her startup, but AGI is top of mind.
Notes
[1] AI Overview – Synthetic Data
Synthetic data is artificial data that mimics real-world data and is created using algorithms and simulations. It’s used in a variety of fields, including data science and machine learning, for research, testing, and development.
Here are some characteristics of synthetic data:
• Similar to real data
Synthetic data has the same mathematical properties as the original data it’s based on, and when subjected to the same statistical analysis, it should produce similar results.
• Privacy-preserving
Synthetic data can be used to test and improve algorithms without compromising the privacy or security of real-world data.
• Inexpensive
Synthetic data is an inexpensive alternative to real-world data.
• Can be created in large quantities
Synthetic data can be generated in large quantities based on smaller data sets of real data.
• Can be used to augment datasets
Synthetic data can be used to augment existing datasets, especially when the original data is limited or biased.
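The “similar to real data” and “large quantities” points above can be illustrated with a toy parametric sketch in Python. This is a minimal example of my own, not from any particular library: it fits a normal distribution to a small real sample and then draws a much larger synthetic set from the fitted parameters (assuming, for illustration, that the real data is roughly normal).

```python
import random
import statistics

def synthesize_parametric(real_sample, n, seed=0):
    """Fit a normal distribution to a small real sample, then draw
    n synthetic points from it (a simple parametric model)."""
    mu = statistics.mean(real_sample)
    sigma = statistics.stdev(real_sample)
    rng = random.Random(seed)  # seeded for reproducibility
    return [rng.gauss(mu, sigma) for _ in range(n)]

# A small "real" data set, e.g. measured response times in ms.
real = [102.0, 98.5, 101.2, 99.8, 100.4, 97.9, 103.1, 100.0]

# Generate a much larger synthetic set from the fitted parameters.
synthetic = synthesize_parametric(real, n=10_000)

# Subjected to the same statistical analysis, the synthetic data
# should produce similar results to the original.
print(round(statistics.mean(real), 1), round(statistics.mean(synthetic), 1))
```

The same idea scales to richer parametric models; the essential property is that the synthetic set preserves the statistics of the (possibly small or private) real sample.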
Some techniques for generating synthetic data include:
Decision trees: Tree-based models fit to real data, then sampled to produce new records with similar structure
Deep learning algorithms: Neural models (such as variational autoencoders) trained to generate data resembling the training set
Generative Adversarial Networks (GANs): A class of machine learning frameworks where two neural networks train each other iteratively
Rule-based data generation: A technique where synthetic data is created based on a set of predefined rules and conditions
Parametric models: A technique where synthetic data is generated by sampling from mathematical representations of the data distribution
Random sampling: A technique where synthetic data is generated by randomly sampling from the existing data
Linear interpolation: A technique where synthetic data points are generated between existing ones to create a smoother time-series representation
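The last technique in the list can be sketched in a few lines. The helper name below (`interpolate_midpoints`) is my own illustration, not a library API: it inserts a synthetic midpoint between each pair of consecutive readings, densifying a sparse time series.

```python
def interpolate_midpoints(series):
    """Linear interpolation: insert the midpoint between each pair of
    consecutive values, roughly doubling the series' resolution."""
    out = []
    for a, b in zip(series, series[1:]):
        out.append(a)
        out.append((a + b) / 2)  # synthetic point between real ones
    out.append(series[-1])
    return out

# Sparse hourly readings become a smoother half-hourly series.
readings = [10.0, 12.0, 11.0]
print(interpolate_midpoints(readings))  # → [10.0, 11.0, 12.0, 11.5, 11.0]
```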
I’ve been wondering about the consequences of abdicating literacy to AIs. What this might do to our reliance on individual merit and trust. This article highlights some current risks. Including the issue of the “liar’s dividend.”
• Wired > “Worry About Misuse of AI, Not Superintelligence” by Arvind Narayanan (Dec 13, 2024) – AI risks arise not from AI acting on its own, but because of what people do with it.
Key points