Elevating humanity – OpenAI’s narrative for AGI

What could possibly go ...

Regarding the timeframe for achieving Artificial General Intelligence (AGI), and what qualifies as AGI: recently (December 4, 2024) on CNBC, Andrew Ross Sorkin interviewed Sam Altman, co-founder and CEO of OpenAI, at The New York Times' annual DealBook Summit at Jazz at Lincoln Center in New York City.

Computer problem
I’ve been elevated …

Altman said that quite capable AI agents (able to choreograph complex processes) will become available for businesses in a few years.

I wonder how this might reshape individual merit [1] and trust in the workplace. And when AGI (whatever the scope) arrives, …

• CNBC > “OpenAI’s Sam Altman on launching GPT4” – Sam Altman, OpenAI CEO, discusses the release of ChatGPT (Dec 4, 2024)

• The Verge > “Sam Altman lowers the bar for AGI” by Alex Heath (Dec 4, 2024) – OpenAI’s charter once said that AGI will be able to “automate the great majority of intellectual labor.”

Nearly two years ago, OpenAI said that artificial general intelligence — the thing the company was created to build — could “elevate humanity” and “give everyone incredible new capabilities.”

“My guess is we will hit AGI sooner than most people in the world think and it will matter much less,” he said during an interview with Andrew Ross Sorkin at The New York Times DealBook Summit on Wednesday. “And a lot of the safety concerns that we and others expressed actually don’t come at the AGI moment. AGI can get built, the world mostly goes on in mostly the same way, things grow faster, but then there is a long continuation from what we call AGI to what we call superintelligence.”

We at The Verge have heard OpenAI intends to weave together its large language models and declare that to be AGI.

Notes

[1] The Aristocracy of Talent

… there is one idea that still commands widespread enthusiasm: that an individual’s position in society should depend on his or her combination of ability and effort. Meritocracy, a word invented as recently as 1958 by the British sociologist Michael Young, is the closest thing we have today to a universal ideology. – Wooldridge, Adrian. The Aristocracy of Talent: How Meritocracy Made the Modern World (2021) (p. 1). Skyhorse. Kindle Edition.

4 comments

  1. Chips for all ... Another AI evangelist promotes a new global infrastructure, wherein “data encodes society’s knowledge and culture and common sense … hopes and dreams.”

    • Wired > “Jensen Huang Wants to Make AI the New World Infrastructure” by Zeyi Yang (Dec 3, 2024) – “People are starting to realize that AI is like the energy and communications infrastructure – and now there’s going to be a digital intelligence infrastructure.”

    Talking to WIRED senior writer Lauren Goode at The Big Interview event on Tuesday in San Francisco, Huang called the trend of AI “a reset of computing as we know of [it] over the last 60 years.” The force of AI is, he said, “so incredible, it’s not as if you can compete against it. You are either on this wave, or you missed that wave.”

    If companies from the US and China are defining what our future looks like, other countries are rightfully worried about whether they can protect their own interests in the AI age. That’s what makes Huang’s “sovereign AI” pitch a popular one to governments worldwide.

    “Countries are awakened to the incredible capabilities of AI and the importance of AI for their own nations,” Huang said. “They realize that their data is part of their natural resources. Their data encodes their society’s knowledge and culture and common sense. Their hopes and dreams.”

  2. AI doctor

    Here’s another perspective on elevation of services via AIs. Regarding “jobs that rely on emotional connections.” And the value of human attention. Empathy.

    Years ago, I noticed some doctors leaving practices based on health insurance and promoting “concierge medicine.” Some of the key talking points were better access, more personalized care, and more time with the doctor. Will the same stratification apply to the use of AI?

    This article has some interesting notes regarding being “fully present” for patients:

    • Human care and attention helps people to feel “seen,” and that sense of recognition underlies health and well-being as well as valuable social goods like trust and belonging.
    • … people who talked to their barista derived greater well-being benefits than those who breezed right by them.
    • Researchers have found that people feel more socially connected when they have had deeper conversations and divulge more during their interactions.
    • As one pediatrician told me: “I don’t invite people to open up because I don’t have time. You know, everyone deserves as much time as they need, and that’s what would really help people to have that time, but it’s not profitable.”
    • Technology does not arrive on a blank slate, but intersects with existing inequalities, … the stratification of human connection.

    • Wired > “The Rich Can Afford Personal Care. The Rest Will Have to Make Do With AI” by Allison Pugh (Dec 7, 2024) – From personal trainers to in-person therapy, what happens if only the wealthy have access to human connection?

  3. Moving toward AGI

    Here’s a vision of AGI based on progressive levels of cognitive performance: high school to college to Ph.D. In the interview, Murati was deliberate in the use of the word cognitive; so, I wonder how that will align with emotional performance. And the range of competence across human tasks.

    Another factor is the interplay of synthetic data [1].

    Murati viewed current “market alignment” (the self-interest of competitors) as addressing safety (which includes privacy) concerns. But that also depends on the social structure within which AIs operate.

    The longer term “vector” – regarding AGI – relies on our “agency,” as we collectively move forward.

    • Wired > “Mira Murati Quit OpenAI. She’s as Optimistic as Ever About AGI” by Paresh Dave (Dec 3, 2024) – includes video – At WIRED’s The Big Interview event, the ex-OpenAI CTO said she’s still in the midst of setting up her startup, but AGI is top of mind.

    Former OpenAI executive Mira Murati says it could take decades, but AI systems eventually will perform a wide range of cognitive tasks as well as humans do – a prospective technological milestone widely known as artificial general intelligence, or AGI.

    Murati started out in aerospace and then worked at Elon Musk’s Tesla on the Model S and Model X electric cars. She also oversaw product and engineering at virtual reality startup Leap Motion before joining OpenAI in 2018 and helping manage services such as ChatGPT and Dall-E. She became one of OpenAI’s top executives and was briefly in charge last year while board members wrestled with the fate of CEO Sam Altman.

    But it’s not all technological. “This technology is not intrinsically good or bad,” she said. “It comes with both sides.” It’s up to society, Murati said, to collectively keep steering the models toward good – so we’re well prepared for the day AGI comes.

    Notes

    [1] AI Overview – Synthetic Data

    Synthetic data is artificial data that mimics real-world data and is created using algorithms and simulations. It’s used in a variety of fields, including data science and machine learning, for research, testing, and development.

    Here are some characteristics of synthetic data:

    • Similar to real data

    Synthetic data has the same mathematical properties as the original data it’s based on, and when subjected to the same statistical analysis, it should produce similar results.

    • Privacy-preserving

    Synthetic data can be used to test and improve algorithms without compromising the privacy or security of real-world data.

    • Inexpensive

    Synthetic data is an inexpensive alternative to real-world data.

    • Can be created in large quantities

    Synthetic data can be generated in large quantities based on smaller data sets of real data.

    • Can be used to augment datasets

    Synthetic data can be used to augment existing datasets, especially when the original data is limited or biased.

    Some techniques for generating synthetic data include:

    • Decision trees: models fit to the original data, then sampled to produce new, statistically similar records

    • Deep learning algorithms: neural networks trained to model the data distribution and then sample from it

    • Generative Adversarial Networks (GANs): a class of machine learning frameworks in which two neural networks (a generator and a discriminator) train each other iteratively

    • Rule-based data generation: a technique where synthetic data is created based on a set of predefined rules and conditions

    • Parametric models: a technique where synthetic data is generated by sampling from mathematical representations of the data distribution

    • Random sampling: a technique where synthetic data is generated by randomly sampling from the existing data

    • Linear interpolation: a technique where synthetic data points are generated between existing ones to create a smoother time-series representation
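    As an illustration (mine, not from the AI Overview), two of the simpler techniques above, linear interpolation and random sampling, can be sketched in a few lines of Python using only the standard library:

    ```python
    import random

    def interpolate_series(values, points_between=1):
        """Generate synthetic points by linear interpolation between
        consecutive values of a time series (a smoother representation)."""
        synthetic = []
        for a, b in zip(values, values[1:]):
            synthetic.append(a)
            for k in range(1, points_between + 1):
                t = k / (points_between + 1)
                synthetic.append(a + (b - a) * t)  # point t of the way from a to b
        synthetic.append(values[-1])
        return synthetic

    def resample(values, n, seed=0):
        """Generate synthetic points by random sampling (with replacement)
        from the existing data; seeded for reproducibility."""
        rng = random.Random(seed)
        return [rng.choice(values) for _ in range(n)]

    series = [1.0, 2.0, 4.0]
    print(interpolate_series(series))  # [1.0, 1.5, 2.0, 3.0, 4.0]
    print(resample(series, 5))         # five values drawn from the series
    ```

    Real synthetic-data tools (GANs, parametric models) are far more elaborate, but both sketches show the basic idea: new data points derived from, and statistically tied to, a smaller set of real ones.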


  4. Invoking the trickster

    I’ve been wondering about the consequences of abdicating literacy to AIs. What this might do to our reliance on individual merit and trust. This article highlights some current risks. Including the issue of the “liar’s dividend.”

    • Wired > “Worry About Misuse of AI, Not Superintelligence” by Arvind Narayanan (Dec 13, 2024) – AI risks arise not from AI acting on its own, but because of what people do with it.

    OpenAI CEO Sam Altman expects AGI, or artificial general intelligence – AI that outperforms humans at most tasks – around 2027 or 2028. Elon Musk’s prediction is either 2025 or 2026, and he has claimed that he was “losing sleep over the threat of AI danger.”

    Advanced AI is certainly a long-term worry that researchers should study. But AI risks in 2025 will arise from misuse, as they have thus far.

    Key points

    • Legal over-reliance – faulty or fake court briefs and case citations
    • Exploitive deepfakes
    • Exploitive promotion & sale of bogus products claiming to be “AI” (akin to products claiming to be “miracle” cures)
    • Abdication of selective social decisions to AI algorithms, especially for procedural enforcement or disenfranchisement

Comments are closed.