Sunday, September 24, 2023

AIs will become useless if they keep learning from other AIs - Dlight News

Artificial intelligences that are trained on text and images from other AIs, which have themselves been trained on AI output, could eventually become functionally useless.

AIs like ChatGPT, known as large language models (LLMs), use vast repositories of human-written text from the internet to build a statistical model of human language, so they can predict which words are most likely to come next in a sentence. Since they became available, the internet has been flooded with AI-generated text, but it is unclear what effect this will have on future AIs.
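To illustrate the idea in miniature (real LLMs use neural networks trained on far larger corpora, not simple counts), the sketch below builds a crude next-word predictor purely from statistics of word pairs in a tiny invented corpus:

```python
# Illustrative sketch only: real LLMs are neural networks, but the core idea of
# predicting the next word from statistics of previously seen text can be shown
# with a simple bigram count model over a toy corpus.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each preceding word.
next_word_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_word_counts[prev][nxt] += 1

def most_likely_next(word):
    """Return the word that most often follows `word` in the corpus."""
    counts = next_word_counts[word]
    return counts.most_common(1)[0][0] if counts else None

print(most_likely_next("the"))  # prints 'cat', the most frequent follower of 'the'
```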

Now, Ilia Shumailov at Oxford University and colleagues have found that AI models trained on the outputs of other AIs become heavily biased, overly simple and disconnected from reality, a problem they call model collapse.

This failure occurs because of the way AI models statistically represent text. An AI that sees a phrase or sentence many times is likely to repeat it in its output, and is less likely to produce something it has rarely seen. When new models are trained on text from other AIs, they see only a small fraction of the original AI's possible outputs. This subset is unlikely to contain rarer outputs, so the new AI will not include them in its own range of possible outputs.
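The narrowing effect can be sketched with a toy simulation (the category names and probabilities below are invented purely for illustration): each generation is fitted only to a finite sample drawn from the previous generation, and once a rare output fails to appear in a sample, it can never return.

```python
# Toy simulation of model collapse: each "generation" is refit only on samples
# drawn from the previous generation, so rare outputs can drop out and, once
# gone, are never recovered. All values here are hypothetical.
import random
from collections import Counter

random.seed(0)

vocabulary = ["common_a", "common_b", "rare_c", "rare_d"]  # generation 0: "human" data
weights = [0.47, 0.47, 0.03, 0.03]

vocab, probs = vocabulary, weights
for generation in range(6):
    data = random.choices(vocab, weights=probs, k=100)   # finite training set
    counts = Counter(data)
    vocab = list(counts)                                  # rare items may vanish here
    total = sum(counts.values())
    probs = [counts[v] / total for v in vocab]            # refit on the samples only
    print(f"gen {generation}: {dict(counts)}")
```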

The models also have no way of telling whether the AI-generated text they see corresponds to reality, which could introduce even more misinformation than current models produce.

The lack of sufficiently diverse training data is compounded by shortcomings in the models themselves and in the way they are trained, which never perfectly represent the underlying data in the first place. Shumailov and his team showed that this results in model collapse for a variety of different AI models. “As this process repeats itself, we are finally converging on this crazy state where there are just errors, errors, and errors, and the magnitude of the errors is far greater than anything else,” Shumailov says.
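The same compounding can be sketched with an even simpler toy (again, not the researchers' actual experiment): repeatedly refitting a normal distribution to finite samples of its own output means each generation inherits the previous one's estimation error, and nothing ever corrects it.

```python
# Toy sketch of compounding estimation error: each generation fits a normal
# distribution to a finite sample drawn from the previous generation's fit.
# The small error in each fit is passed on, uncorrected, to every later one.
import random
import statistics

random.seed(1)

mean, stdev = 0.0, 1.0          # generation 0: the "real" data distribution
for generation in range(1, 11):
    sample = [random.gauss(mean, stdev) for _ in range(100)]
    mean = statistics.fmean(sample)    # refit on the sample only
    stdev = statistics.stdev(sample)
    print(f"gen {generation}: mean={mean:+.3f}, stdev={stdev:.3f}")
```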

How quickly this process occurs depends on the amount of AI-generated content in an AI’s training data and what type of model it uses, but all models exposed to AI data appear to collapse eventually.

The only way around this would be to label and exclude AI-generated outputs, Shumailov says. But this is impossible to do reliably, unless you own an interface where humans are known to enter text, such as Google’s or OpenAI’s ChatGPT interface, a dynamic that could entrench the already significant financial and computational advantages of big tech companies.

Some of the errors could be mitigated by instructing AIs to give preference to training data from before AI content flooded the web, says Vinu Sadasivan at the University of Maryland.

It is also possible that humans won’t post AI-generated content on the internet without editing it themselves first, says Florian Tramer at the Swiss Federal Institute of Technology in Zurich. “Even if the LLM itself is biased in some way, the human prompting and filtering process could mitigate this to make the final results closer to the original human bias,” he says.
