GPT-4 from OpenAI shows advances — and money-making potential

Klinton Bicknell was let into one of the tech world’s great mysteries last September. The head of AI at language learning app Duolingo was given rare access to GPT-4, a new artificial intelligence model created by Microsoft-backed OpenAI.

He soon discovered that the new AI system was even more advanced than OpenAI’s previous model, the one used to power the hit ChatGPT chatbot, which generates realistic answers in response to text prompts.

Within six months, Bicknell’s team had used GPT-4 to build a sophisticated chatbot of its own that users can talk to in order to practice conversational French, Spanish and English, as if they were in a real-world setting such as an airport or a cafe.

“It was amazing how the model had such a detailed and specific knowledge of how languages work and the correspondences between different languages,” Bicknell said. “With GPT-3, which we were already using, this just wasn’t a viable feature.”
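In practice, a conversation partner of this kind can be built on little more than the OpenAI chat API: the full dialogue history is resent on every turn, and a system prompt pins the model to a role-play scenario. The sketch below is a minimal illustration of that pattern, assuming the openai Python client (v1.0+); the model name, system prompt and cafe scenario are placeholder assumptions, not Duolingo’s actual implementation.

```python
# Minimal sketch of a GPT-4 role-play conversation partner.
# Assumptions: openai>=1.0 is installed and OPENAI_API_KEY is set;
# the model name and scenario are illustrative, not Duolingo's product.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A system prompt fixes the scenario the learner practices in.
messages = [
    {
        "role": "system",
        "content": (
            "You are a friendly barista in a Paris cafe. Speak only French, "
            "keep replies short, and gently correct the learner's mistakes."
        ),
    }
]

while True:
    user_input = input("You: ")
    if not user_input:
        break
    # Append the learner's turn, then resend the whole history:
    # the API is stateless, so context lives in the message list.
    messages.append({"role": "user", "content": user_input})
    response = client.chat.completions.create(model="gpt-4", messages=messages)
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    print("Bot:", reply)
```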

Duolingo is one of a handful of companies, including Morgan Stanley Wealth Management and online education group Khan Academy, that were given early access to GPT-4 ahead of its wider launch this week.

The release illustrates how OpenAI has transformed from a research-focused group into a company valued at around $30bn, racing against giants such as Google in efforts to commercialize AI technology.

OpenAI announced that GPT-4 demonstrated “human-level” performance on a series of standardized tests such as the US bar exam and SAT school tests, and showed how its partners are using AI software to create new products and services.

But for the first time, OpenAI did not disclose any details about the technical aspects of GPT-4, such as what data it was trained on or the hardware and computing capacity used, because of “both the competitive landscape and the safety implications”.

This represents a shift for OpenAI, which was formed in 2015 as a non-profit, in part the brainchild of some of the tech world’s most radical thinkers, including Elon Musk and Peter Thiel. It was built on the principles of making AI accessible to everyone through scientific publications and developing the technology safely.

A pivot in 2019 turned it into a for-profit enterprise, backed by a $1bn investment from Microsoft. That was followed by further multibillion-dollar funding from the tech giant this year, with OpenAI quickly becoming a crucial part of Microsoft’s bet that AI systems will transform its business models and products.

The change prompted Musk, who left OpenAI’s board in 2018, to tweet this week that he was “still confused as to how a non-profit to which I donated ~$100mn somehow became a $30bn market cap for-profit. If this is legal, why doesn’t everyone do it?”

OpenAI’s lack of transparency regarding the technical details of GPT-4 has drawn criticism from others in the AI community.

“It’s so opaque, they’re saying, ‘Trust us, we did the right thing,'” said Alex Hanna, director of research at the Distributed AI Research Institute (DAIR) and a former member of Google’s ethical AI team. “They are cherry-picking these tasks, because there is no scientifically agreed benchmark set.”

GPT-4, which can be accessed through the $20-a-month paid version of ChatGPT, has shown rapid improvement over previous AI models on certain tasks. For example, GPT-4 scored in the 90th percentile on the Uniform Bar Exam taken by prospective lawyers in the US. ChatGPT only reached the 10th percentile.

While OpenAI has not provided details, AI experts believe the model is larger than previous generations and has undergone far more fine-tuning based on human feedback.

The most obvious new feature is that GPT-4 can accept input in both text and image form, although it only responds in text. This means users can upload a photo and ask the model to describe the image in painstaking detail, request meal ideas based on the ingredients pictured, or ask it to explain the joke behind a visual meme.
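In API terms, such a request interleaves text and image parts inside a single message. The following is a minimal sketch, assuming the openai Python client and a vision-capable GPT-4 variant; the model name and image URL are placeholder assumptions rather than the launch-day configuration.

```python
# Minimal sketch of a text-plus-image request to a vision-capable GPT-4
# model. Assumptions: openai>=1.0, OPENAI_API_KEY set, and a model name
# and image URL that are placeholders for illustration only.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-turbo",  # assumption: any vision-capable GPT-4 variant
    messages=[
        {
            "role": "user",
            # One message can mix text and image parts.
            "content": [
                {
                    "type": "text",
                    "text": "Describe this image in detail and suggest "
                            "a meal using the ingredients shown.",
                },
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/fridge.jpg"},
                },
            ],
        }
    ],
)

# The reply comes back as plain text only; GPT-4 does not generate images.
print(response.choices[0].message.content)
```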

GPT-4 is also capable of ingesting and generating larger amounts of text than other models of its type: users can feed in up to 25,000 words, compared with about 3,000 words for ChatGPT. This means it can handle detailed financial documentation, literary works or technical manuals.
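These limits are actually specified in tokens rather than words; roughly 25,000 English words works out to around 32,000 tokens. A rough pre-flight check, assuming OpenAI’s tiktoken tokenizer and the 32k-token GPT-4 variant, might look like the sketch below.

```python
# Rough check of whether a document fits GPT-4's larger context window.
# Assumptions: the tiktoken library is installed, and the 32,768-token
# limit of the 32k GPT-4 variant (roughly the 25,000-word figure above).
import tiktoken

def fits_in_context(text: str, limit: int = 32_768) -> bool:
    # tiktoken maps model names to the matching tokenizer (cl100k_base
    # for GPT-4), so the count mirrors what the API would see.
    enc = tiktoken.encoding_for_model("gpt-4")
    n_tokens = len(enc.encode(text))
    print(f"{n_tokens} tokens (limit {limit})")
    return n_tokens <= limit

# Example: ~20,000 short words still fits comfortably.
fits_in_context("Lorem ipsum " * 10_000)
```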

Its more advanced reasoning and parsing capabilities mean it is more adept at analyzing complex legal contracts for risks, said Winston Weinberg, co-founder of Harvey, an AI chatbot built using GPT-4 and used by PwC and magic circle law firm Allen & Overy.

Despite this progress, OpenAI has warned of many risks and limitations of GPT-4. These include its ability to provide detailed information on how to carry out illegal activities, including developing biological weapons, and to generate hateful and discriminatory speech.

OpenAI put GPT-4 through a safety testing process known as red-teaming, where more than 50 external experts in subjects ranging from medicinal chemistry to nuclear physics and disinformation were asked to try to break the model.

Paul Rottger, an AI researcher at the Oxford Internet Institute who focuses on identifying toxic content online, was contracted by OpenAI for six months to try to elicit harmful responses from GPT-4, such as graphic descriptions of violence or examples of extremism and hate speech.

He said the model’s responses improved overall across the months of testing: where it would initially merely hedge its answers, it later became more precise in rejecting bad prompts.

“On the one hand, safety research has progressed since GPT-3, and there are many good ideas to make this model safer,” he said. “But at the same time, this model is much more powerful and can do much more than GPT-3, so the risk surface is also much larger.”
