
We need to build guardrails for AI

What if the only thing you could truly trust was something or someone close enough to physically touch? That could be the world AI is leading us to. A group of Harvard academics and artificial intelligence experts has just released a report that aims to put ethical guardrails around the development of potentially dystopian technologies such as Microsoft-backed OpenAI’s apparently sentient chatbot, which debuted last week in a new and “improved” (depending on your point of view) version, GPT-4.

The group, which includes Glen Weyl, a Microsoft economist and researcher, Danielle Allen, a Harvard philosopher and director of the Safra Center for Ethics, and several others, is sounding the alarm about “a plethora of experiments with decentralized social technologies.” These include the development of “highly persuasive machine-generated content (e.g. ChatGPT)” that threatens to disrupt the fabric of our economy, politics and society.

They believe we have reached a “constitutional moment” of change that requires an entirely new regulatory framework for such technologies.

Some of the dangers of AI, such as a Terminator-style future in which machines decide that humans have had their day, are well-trodden territory in science fiction, which, it should be noted, has had a pretty good record over the past 100 years of predicting where science itself will go. But there are other dangers that are less well understood. If, for example, AI can now generate completely untraceable fake IDs, what good are the legal and governance structures that rely on such documents to allow us to drive, travel or pay taxes?

One thing we already know is that AI can allow bad actors to pose as anyone, anywhere, anytime. “You have to assume that fraud is going to be much cheaper and more prevalent in this new era,” says Weyl, who has published an online book with Taiwan’s digital minister, Audrey Tang. It lays out the dangers that AI and other advanced information technologies pose to democracy, not least by putting the problem of disinformation on steroids.

The potential impact permeates every aspect of society and the economy. How do we know if a digital fund transfer is secure or even authorized? Will online notaries and contracts be reliable? Will fake news, already a huge problem, become essentially undetectable? And what about the political implications of countless job disruptions, a topic that scholars Daron Acemoglu and Simon Johnson will explore in a very important book later this year?

One can easily imagine a world in which governments struggle to keep up with these changes and, as the Harvard report puts it, “existing, highly imperfect democratic processes prove impotent . . . and are therefore abandoned by an increasingly disaffected citizenry.”

We have already seen signs of this. The private Texas town built by Elon Musk to house employees of SpaceX, Tesla and the Boring Company is just the latest iteration of the Silicon Valley libertarian fantasy in which the rich take refuge in private compounds in New Zealand, or move their assets and businesses outside government jurisdictions and into “special economic zones.” Wellesley historian Quinn Slobodian tackles the emergence of such zones in his new book, Crack-Up Capitalism.

In this scenario, tax revenues fall, labor’s share of income falls, and the resulting zero-sum world entrenches an “exitocracy” of the privileged.

Of course, the future could be very bright too. AI has incredible potential to increase productivity and innovation, and may even allow us to redistribute digital wealth in new ways. But what is already clear is that companies are not going to pull back from developing advanced Web3 technologies, from AI to blockchain, any time soon. They see themselves in an existential race with one another, and with China, for the future.

As such, they are looking for ways to sell not just AI, but security solutions for it. For example, in a world in which trust cannot be digitally verified, AI developers at Microsoft and other companies are considering whether there might be a way to create more advanced versions of “shared secrets” (things that only you and another close individual would know about) that can be checked digitally and at scale.
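To make the idea concrete, here is a minimal, hypothetical sketch of how a shared secret can already be checked digitally without ever being transmitted, using a standard challenge-response pattern. This is an illustration of the general concept only, not Microsoft’s or any other company’s actual design; the secret phrase and function names are invented for the example.

```python
# Illustrative only: a basic challenge-response check built on a shared secret.
import hashlib
import hmac
import secrets

# Something only the two parties would know (purely illustrative).
SHARED_SECRET = b"the name of the cafe where we first met"

def make_challenge() -> bytes:
    # The verifier sends a fresh random challenge so old answers can't be replayed.
    return secrets.token_bytes(16)

def respond(challenge: bytes, secret: bytes) -> str:
    # The prover answers with an HMAC of the challenge, keyed by the secret,
    # so the secret itself is never sent over the wire.
    return hmac.new(secret, challenge, hashlib.sha256).hexdigest()

def verify(challenge: bytes, response: str, secret: bytes) -> bool:
    expected = hmac.new(secret, challenge, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)

challenge = make_challenge()
answer = respond(challenge, SHARED_SECRET)
print(verify(challenge, answer, SHARED_SECRET))  # True only if both sides know the secret
```

The harder problem the researchers describe is doing something like this at scale, and for human-memorable secrets rather than cryptographic keys, in a world where AI can plausibly guess or impersonate its way past such checks.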

However, that seems like solving a technology problem with more technology. In fact, the best solution to the AI puzzle may, to an extent, be analog.

“We need a more prudential due diligence framework,” Allen says, citing a 2010 report by the Presidential Commission for the Study of Bioethical Issues, released in response to the rise of genomics. It created guidelines for responsible experimentation that allowed for safer technological development (although one could point to new information about possible lab leaks in the Covid-19 pandemic and say that no framework is internationally foolproof).

For now, in lieu of outlawing AI or building a fully fledged regulatory mechanism, we can start by forcing companies to disclose what experiments they are doing, what has worked, what has not, and where there might be unintended consequences. Transparency is the first step towards ensuring that AI does not get the better of its creators.

rana.foroohar@ft.com
