
ChatGPT creator investigated by US regulators over AI risks


The risks posed by artificially intelligent chatbots are being officially investigated by US regulators for the first time after the Federal Trade Commission launched a wide-ranging investigation into ChatGPT maker OpenAI.

In a letter to the Microsoft-backed company, the FTC said it will look into whether people have been harmed by the AI chatbot’s creation of false information about them, as well as whether OpenAI “improperly or misleadingly” handled privacy and data security.

Generative AI products are in the crosshairs of regulators around the world, as AI experts and ethicists sound the alarm over the enormous volumes of personal data used by the technology, as well as its potentially harmful outputs, from misinformation to sexist and racist comments.

In May, the FTC warned the industry that it was focused on “how companies may choose to use AI technologies, including new generative AI tools, in ways that may have a real and significant impact on consumers”.

In its letter, the US regulator asked OpenAI to share internal material on how the group retains user data, as well as the steps the company has taken to address the risk of its models generating “false, misleading or offensive” statements.

The FTC declined to comment on the letter, which was first reported by The Washington Post. Writing on Twitter later on Thursday, OpenAI chief executive Sam Altman said: “Seeing the FTC request start with a leak is very disappointing and does not help build trust.” He added: “It is very important to us that our technology is safe and consumer-friendly, and we are confident that we comply with the law. Of course we will work with the FTC.”

Lina Khan, the FTC chairwoman, testified before the House Judiciary Committee Thursday morning and faced sharp criticism from Republican lawmakers over her tough enforcement stance.

When asked about the investigation during the hearing, Khan declined to comment but said the regulator’s broader concern was that ChatGPT and other AI services were being “fed a huge amount of data” while there were no checks on what kind of data was being fed into these companies.

She added: “We have heard of reports where people’s sensitive information is being exposed in response to someone else’s enquiry. We hear about defamatory statements, blatantly untrue things that are emerging. It is the type of fraud and deception that we are concerned about.”

Khan was also quizzed by lawmakers over her mixed record in court after the FTC suffered a major defeat this week in its attempt to block Microsoft’s $75bn acquisition of Activision Blizzard. The FTC appealed the decision on Thursday.

Meanwhile, committee chairman Republican Jim Jordan accused Khan of “harassing” Twitter after the company alleged in court filings that the FTC acted “erratically and improperly” in enforcing the consent order it imposed last year.

Khan did not comment on Twitter’s filing but said all the FTC cares about is that “the company complies with the law”.

Experts are concerned about the huge volumes of data ingested by the language models behind ChatGPT. ChatGPT reached more than 100 million monthly active users within two months of its launch. Microsoft’s new Bing search engine, also powered by OpenAI technology, was being used by more than one million people in 169 countries within two weeks of its release in January.

Users have reported that ChatGPT fabricates names, dates and facts, as well as fake links to news websites and references to academic papers, a problem known in the industry as “hallucinations”.

The FTC’s investigation digs into the technical details of how ChatGPT was designed, including the company’s work on tackling hallucinations and its oversight of human reviewers, both of which directly affect consumers. It has also sought information on consumer complaints and on the company’s efforts to assess consumers’ understanding of the chatbot’s accuracy and reliability.

In March, Italy’s privacy watchdog temporarily banned ChatGPT while it investigated the US company’s collection of personal information following a cybersecurity breach, among other issues. The ban was lifted a few weeks later, after OpenAI made its privacy policy more accessible and introduced a tool to verify users’ ages.

Echoing earlier admissions about ChatGPT’s inadequacies, Altman tweeted: “We are transparent about the limitations of our technology, especially when we fall short. And our capped-profit structure means we’re not incentivized to make unlimited returns.” However, he said the chatbot was built on “years of security research”, adding: “We protect user privacy and design our systems to learn about the world, not private individuals.”
