Better to ask forgiveness than permission? – Big Tech regulatory framework

What could possibly go ...
“Silicon is our ultimate prosthesis” – Steven Levy (1997)

You want your kids to be good citizens, conscientious and responsible, but also resilient to new challenges and hardships (because “sh** happens”). So how do you come up with guidelines for their growth that are as dynamic as the novel circumstances they’ll face?

Ditto for economic and social health and growth?

As noted elsewhere, in a comment to my post “Lords of AI,” the news cycle this past week contained commentary on former FCC chair Tom Wheeler’s new book: Techlash: Who Makes the Rules in the Digital Gilded Age?

Steven Levy’s latest newsletter (cited below) provides more background on the state of regulation, in particular his mention of the phrase “permissionless innovation” [vs. “precautionary principle”]. I looked that up and found it a helpful (and well-established) way to frame the policy debate, one that marks a vital divide in history, both for the United States and elsewhere.

Setting aside the ideological bent of the Cato Institute, this 2014 article in their online forum provides useful definitions for the contrasting perspectives.

… “Permissionless innovation” is a phrase of recent (but uncertain) origin … Permissionless innovation refers to the notion that experimentation with new technologies and business models should generally be permitted by default. Unless a compelling case can be made that a new invention or business model will bring serious harm to individuals, innovation should be allowed to continue unabated and problems, if they develop at all, can be addressed later [as in the trope, “It’s easier to ask forgiveness than it is to get permission” – “It’s better to ask forgiveness than permission”].

Permissionless innovation is not an absolutist position that rejects any role for government. Rather, it is an aspirational goal that stresses the benefit of “innovation allowed” as the default position to begin policy debates. It switches the burden of proof to those who favor preemptive regulation and asks them to explain why ongoing trial-and-error experimentation with new technologies or business models should be disallowed.

This disposition stands in stark contrast to the sort of “precautionary principle” thinking that often governs policy toward emerging technologies. The precautionary principle refers to the belief that new innovations should be curtailed or disallowed until their developers can prove that they will not cause any harms to individuals, groups, specific entities, cultural norms, or various existing laws, norms, or traditions.

When the precautionary principle’s “better to be safe than sorry” approach is applied through preemptive constraints, opportunities for experimentation and entrepreneurialism are stifled. While some steps to anticipate or control for unforeseen circumstances are sensible, going overboard with precaution forecloses opportunities and experiences that offer valuable lessons for individuals and society. The result is less economic and social dynamism.

Of course, Wheeler’s book discusses additional history of this interplay (e.g., trust-busting), its collateral damage, and its implications.

And China’s approach to innovation is profiled in Keyu Jin’s book The New China Playbook: Beyond Socialism and Capitalism (2023), in particular the pseudo-government roles of state-sponsored enterprises and the co-opting of state capacity.

Clearly, economic and social dynamism is a balancing act, as discussed in the 2019 book The Narrow Corridor: States, Societies, and the Fate of Liberty by Daron Acemoglu and James A. Robinson.

Any position that casts a “setpoint” or “default” in concrete, as an absolute guideline, compromises state and societal resiliency. An attitude of “set it and forget it” is unwise; it merely gambles with our future.

Do all Americans have access to broadband yet? And what about regulatory capture?

• Wired > email newsletter > Steven Levy > Plaintext > The Plain View > “Here’s a new plan to rein in the gilded tech bros” (October 27, 2023) – Is it possible to come up with “regulation that is as innovative as the digital innovators themselves”?

Once Wheeler [formerly head lobbyist for not one, but two industries: cable TV and cellular telecom] took over, he displayed a bent for bucking the big communications and tech giants, and looking out for the people. He managed to get net neutrality rules passed. He went to Facebook’s headquarters and argued with Mark Zuckerberg about the company’s self-serving scheme to provide free data to India and other underserved countries. He came to despise the term “permissionless innovation,” which cast public-minded regulators like himself as nosy opponents of progress.

When I note this to Wheeler [the strident tone in his book], the former lobbyist hastens to say he’s not really arguing for revolution. “I’m a capital-C capitalist,” he says. “But capitalism works best when it operates inside guardrails. And in the digital environment, we’re existing in a world without guardrails.”

“The digital platforms collect, aggregate, and then manipulate personal data at marginal costs approaching zero,” he writes. “Then after hoarding the information, they turn around and charge what the market can bear to those who want to use that data … It is, indeed, the world’s greatest business model.” While the subtitle of his book is a question, the answer is obvious and depressing. “Thus far it is the innovators and their investors who make the rules,” he says. “At first this is good, but then they take on pseudo-government roles, and start infringing on the rights of others, and impairing the public interest.”

2 comments

  1. Balancing benefits and risks

    So, how do we “harness the benefits of AI and mitigate the risks”? Going beyond voluntary commitments by AI companies … crafting guardrails … enabling competitiveness … but needing bipartisan Congressional action.

    • Washington Post > “Biden to sign sweeping artificial intelligence executive order” by Cristiano Lima and Cat Zakrzewski (October 30, 2023) – The action represents the U.S. government’s most ambitious attempt to spur innovation and address concerns the burgeoning technology could exacerbate bias, displace workers and undermine national security.

    The sprawling order tackles a broad array of issues, placing new safety obligations on AI developers [e.g., red-teaming] and calling on a slew of federal agencies to mitigate the technology’s risks while evaluating their own use of the tools, according to a summary provided by the White House.

    The executive order comes just days before [VP] Harris is expected to promote the United States’ vision for AI regulation at Britain’s AI Summit, a two-day event that will gather leaders from around the world to talk about how to respond to the most risky applications of the technology.

    • Agencies will be required to continuously monitor and evaluate deployed AI.

    • Government will be directed to develop standards for companies to label AI-generated content [e.g., watermarking].

  2. AI Executive Order stamp

    UPDATE 11-3-2023

    [Stamp image: “Artificial Intelligence – Safety • Security • Trust”]

    Here’s the Executive Order:

    • WhiteHouse.gov > “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” (October 30, 2023)

    [Excerpt]

    By the authority vested in me as President by the Constitution and the laws of the United States of America, it is hereby ordered as follows:

    Section 1. Purpose. Artificial intelligence (AI) holds extraordinary potential for both promise and peril. Responsible AI use has the potential to help solve urgent challenges while making our world more prosperous, productive, innovative, and secure. At the same time, irresponsible use could exacerbate societal harms such as fraud, discrimination, bias, and disinformation; displace and disempower workers; stifle competition; and pose risks to national security. Harnessing AI for good and realizing its myriad benefits requires mitigating its substantial risks. This endeavor demands a society-wide effort that includes government, the private sector, academia, and civil society.

    • CNBC > “Biden issues U.S.’ first AI executive order, requiring safety assessments, civil rights guidance, research on labor market impact” by Hayden Field, Lauren Feiner (October 30, 2023)
