AI chatbot reality check – the bottom line?

The cost of AI

Generative AI chatbots may be cool to many. But the heat (greenhouse gas emissions) and the cost may deflate the hype – a reality check for the bottom line.

Generative AI's data center infrastructure plus operating costs will challenge the business models and profitability of emergent services incorporating this tech [1].

• Washington Post > “AI chatbots lose money every time you use them. That’s a problem.” by Will Oremus (June 5, 2023) – The cost of operating the systems is so high that companies aren’t deploying their best versions to the public.

Key points
  • Chatbots lose money on every chat.
  • Better chatbot quality costs more. So, ads are probably coming to AI chatbots (but profitability will remain elusive, even with smaller, cheaper models).
  • The world’s richest [tech] companies may turn chatbots into moneymakers sooner than the technology is ready.
  • Companies that buy … AI tools [from companies building the leading AI language models] don’t realize they’re being locked into a heavily subsidized service …
  • The intensive computing AI requires is why OpenAI has held back its powerful new language model, GPT-4, from the free version of ChatGPT, which is still running a weaker GPT-3.5 model.
  • A single chat with ChatGPT could cost up to 1,000 times as much as a simple Google search (see the back-of-envelope sketch after this list).
  • Computing requirements also help to explain why OpenAI is no longer the nonprofit it was founded to be.
  • Tech giants are willing to lose money in a bid to win market share with their AI chatbots.
  • Companies adopting generative AI tools (even with all their flaws) might trim human jobs.
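
As a rough illustration of the “1,000 times” figure above, here is a minimal back-of-envelope sketch in Python. Every number in it is a made-up assumption for illustration (per-query inference and search costs are not publicly disclosed); with these particular inputs the ratio comes out to about 150x, and varying them shows what it would take to reach 1,000x.

```python
# Back-of-envelope: hypothetical cost of one chatbot conversation
# versus one web search. All inputs are illustrative assumptions.

SEARCH_COST = 0.0002            # assumed cost of one web search, in dollars
LLM_COST_PER_1K_TOKENS = 0.03   # assumed inference cost per 1,000 tokens
TOKENS_PER_CHAT = 1_000         # assumed prompt + response length

chat_cost = LLM_COST_PER_1K_TOKENS * TOKENS_PER_CHAT / 1_000
ratio = chat_cost / SEARCH_COST

print(f"Cost per chat:   ${chat_cost:.4f}")    # $0.0300
print(f"Cost per search: ${SEARCH_COST:.4f}")  # $0.0002
print(f"Chat/search ratio: {ratio:.0f}x")      # 150x under these assumptions
```
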
Related posts

• [1] Lords of AI – Tech giants and an International Agency > Comment 5/15/2023 > This article discusses a forecast for the industrial cost of AI services – a massive increase, despite ongoing improvements in hardware performance – “as demand for GenAI continues exponentially.”

3 comments

  1. Two perspectives on the utility of AI chatbots: (1) for the general public, “demystifying the fallible Wizard of Oz behind the curtain” and boosting immunity to misinformation; (2) for businesses, crafting AI tech into one’s business policies & practices across brands and contractors.

    1. Dealing with major tech companies releasing AI technology “without ensuring that the general population understands its drawbacks.”

    • Wired > “Don’t Want Students to Rely on ChatGPT? Have Them Use It” by C.W. Howell (Jun 6, 2023) – Many students expressed shock and dismay upon learning the AI could fabricate bogus information, including page numbers for nonexistent books and articles.

    When I first caught students attempting to use ChatGPT to write their essays [in a religion studies class at Elon University], it felt like an inevitability. My initial reaction was frustration and irritation—not to mention gloom and doom about the slow collapse of higher education … But … Many of these essays used sources incorrectly, either quoting from books that did not exist or misrepresenting those that did. When students were starting to use ChatGPT, they seemed to have no idea that it could be wrong.

    Students, and the population at large, are not using ChatGPT in these nuanced ways because they do not know that such options exist [i.e., using newer ChatGPT models, prompt crafting, etc.].
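
    The sketch below illustrates, from the developer side, the kind of options alluded to here: explicitly choosing a newer model and crafting a prompt that asks the model to flag uncertainty. It assumes the 2023-era OpenAI Python SDK (the pre-1.0 `openai.ChatCompletion` interface); the prompt text and API key are placeholders, and prompt crafting reduces, but does not eliminate, fabricated citations.

    ```python
    import openai  # OpenAI Python SDK, pre-1.0 (2023-era) interface

    openai.api_key = "sk-..."  # placeholder; set your own key

    # Explicitly pick a model (rather than the free default) and use a
    # system prompt that asks the model to admit uncertainty.
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Only cite sources you are confident exist; "
                        "answer 'not sure' rather than inventing references."},
            {"role": "user",
             "content": "Suggest two real, published books on religion and AI."},
        ],
    )
    print(response["choices"][0]["message"]["content"])
    ```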

    2. Creating in-house AI tools (in this case “Responsible AI Machine Partner, or RAMP”) as part of one’s business policies & practices. And establishing transparency and accountability for editors, writers, and contractors.

    • The Verge > “CNET is overhauling its AI policy and updating past stories” by Mia Sato (Jun 6, 2023) – In a memo shared today, CNET outlines how it could use AI systems in its journalism in the future. The policy promises that no stories will be entirely produced by an AI tool.

    Of the more than 70 stories published over the course of several months, CNET eventually issued corrections on more than half. … Stories now include an editor’s note reading, “An earlier version of this article was assisted by an AI engine. This version has been substantially updated by a staff writer.”

    Key points

    • Stories will not be written entirely using an AI tool, …

    • Hands-on reviews and testing of products will be done by humans

    • CNET will also not publish images and videos generated using AI “as of now.”

    • It will “explore leveraging” AI tools …

    • The AI policy update comes just weeks after CNET’s editorial staff announced they had formed a union with the Writers Guild of America, East …

    Robot writer

  2. Even some lawyers (not just the general population) appear to be naive about the state of generative AI – and some act in bad faith, like the Wizard of Oz behind the curtain. Yikes!

    • LA Times > “Lawyers who cited fake AI cases are fined” by Associated Press (June 24, 2023) – Federal judge says they drew on fictitious ChatGPT research in aviation injury claim.

    NEW YORK — A federal judge has imposed $5,000 fines on two lawyers and a law firm in an unprecedented case in which ChatGPT was blamed for their submission of fictitious legal research in an aviation injury claim.

    The judge said the lawyers and their firm … “abandoned their responsibilities when they submitted nonexistent judicial opinions with fake quotes and citations created by the artificial intelligence tool ChatGPT, then continued to stand by the fake opinions after judicial orders called their existence into question.”

    In a statement, the firm said … “we made a good faith mistake in failing to believe that a piece of technology could be making up cases out of whole cloth.”

    Castel said the bad faith resulted from the attorneys’ failures to respond properly to the judge and their legal adversaries when it was noticed that six legal cases listed to support their March 1 written arguments did not exist.

    The chatbot … suggested several cases involving aviation incidents that [attorney] Schwartz hadn’t been able to find through the usual methods used at his law firm. Several of those cases weren’t real, misidentified judges or involved airlines that didn’t exist.

    The judge said one of the fake decisions generated by the chatbot had “some traits that are superficially consistent with actual judicial decisions” but that other portions contained “gibberish” and were “nonsensical.”

    Legal standing ...

  3. Buzzy fuzzy planet
    [Image credit: Montage from simpsons.fandom.com]

    So, hype cycles are not new to Silicon Valley – nor are the hundreds of millions, even billions, of dollars staked on technology waves. There was the dot-com bubble, the social media frenzy and shakeout, and there’s the ongoing quest for the self-driving car. And crypto, anyone?

    Business models, business models, business models … When noteworthy reports, as cited in the article below, fuzz the number buzz sans nuance, what could go wrong? Meanwhile, real user numbers are not being disclosed by the tech companies.

    • Washington Post > “Every start-up is an AI company now. Bubble fears are growing.” by Gerrit De Vynck (August 5, 2023) – The 100 million user figure that investment bank UBS reported for ChatGPT (in early 2023) was based on website visits, not official monthly active users.

    That report helped spark the AI fever … But it’s still unclear how and when this technology will actually become profitable — or if it ever will. There are already some reports that ChatGPT usage is declining. And “generative AI” is incredibly expensive to build and run — from specialized chips to data server computing power to expensive engineers.

    “At the end of the day, AI is just software, it’s expensive software,” Andrew Harrison, CEO of venture capital firm Section 32, said. “It’s low-margin software unless it does something that’s 10 times better.”
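
    As a companion to that “low-margin” remark, here is a minimal gross-margin sketch in Python. Every input is a hypothetical assumption (real per-chat inference costs and usage rates are not disclosed); the point is only how quickly inference costs eat into a flat subscription price.

    ```python
    # Hypothetical unit economics for a flat-rate chatbot subscription.
    # All inputs are illustrative assumptions, not disclosed figures.

    PRICE_PER_MONTH = 20.00          # e.g., a ChatGPT-Plus-style subscription
    COST_PER_CHAT = 0.04             # assumed inference cost per conversation
    CHATS_PER_USER_PER_MONTH = 300   # assumed heavy-user activity

    inference_cost = COST_PER_CHAT * CHATS_PER_USER_PER_MONTH
    gross_margin = (PRICE_PER_MONTH - inference_cost) / PRICE_PER_MONTH

    print(f"Monthly inference cost per user: ${inference_cost:.2f}")  # $12.00
    print(f"Gross margin: {gross_margin:.0%}")  # 40% - thin for software
    ```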
