The author is the founder of Sifted, an FT-backed site about European start-ups
Technology, they say, is about turning the magical into the mundane. A decade ago, digital assistants such as Siri, Alexa and Cortana seemed astonishing inventions. Nowadays, Microsoft’s chief executive, Satya Nadella, dismisses them as “dumb as a rock”. How quickly will today’s much-hyped generative AI models become similarly humdrum?
On Tuesday, the San Francisco-based research company OpenAI introduced GPT-4, its latest content-generation model, which boasts nifty new capabilities, such as helping to calculate tax returns. OpenAI’s launch of its impressively fluent – if undeniably flawed – ChatGPT chatbot in November caused a sensation. But in many significant ways, GPT-4 is even more impressive.
The new model is more accurate, more powerful and better at reasoning. ChatGPT struggles to answer the question: what is Laura’s mother’s daughter’s name? But, as the philosopher Luciano Floridi observed while experimenting with it, GPT-4 gives the correct answer (Laura, in case you’re wondering) when told that the question is a logic puzzle.
Moreover, GPT-4 is a multimodal model, combining both text and images. At the launch event, Greg Brockman, co-founder of OpenAI, quickly turned a photograph of a handwritten note into a working website featuring some awful dad jokes. “Why don’t scientists trust atoms?” GPT-4 asked. “Because they make up everything.”
The applications of these generative AI models seem limitless, which explains why venture capital investors are pouring money into the sector. The models are also making their way into all kinds of existing digital services. Microsoft, a major investor in OpenAI, has embedded GPT-4 in its Bing search engine. The payments company Stripe is using it to help detect online fraud. Iceland is employing GPT-4 to improve its local-language chatbots. (It is worth savouring the lovely Icelandic word for computer: tölva, meaning prophetess of numbers.)
Big companies such as Microsoft and Google will be the first to deploy these systems at scale. But some start-ups see opportunities to arm the smaller battalions. Josh Browder, who runs the robolawyer company DoNotPay, which contests parking tickets, says GPT-4 will be a powerful new tool for helping users deal with automated systems. His company is already working on embedding it into an app to issue one-click lawsuits against nuisance robocallers. The technology can also be used to challenge medical bills or cancel subscriptions. “My goal is to give power back to the people,” Browder tells me.
Along with the positive uses of generative AI, however, there are many less visible abuses. Humans are susceptible to the so-called Eliza effect: falsely attributing human thoughts and feelings to computer systems. That makes these models an effective way to manipulate people, warns Margaret Mitchell, a researcher at the AI company Hugging Face.
Machine learning systems, which can synthesize voices and create falsely personalized emails, have already contributed to a rise in impersonation scams in the US. Last year, the Federal Trade Commission recorded 36,000 reports of people being scammed by criminals pretending to be friends or family. The models can also be used to generate disinformation. That is reportedly why Chinese regulators have instructed the country’s tech companies not to offer ChatGPT services, apparently for fear of losing control over the flow of information.
Much remains a mystery about OpenAI’s models. The company admits that GPT-4 exhibits social biases and distorts facts. But it says it spent six months stress-testing GPT-4 for safety and has introduced guardrails through a process called reinforcement learning from human feedback. “It’s not perfect,” Brockman said at the launch. “But neither are you.”
Angry rows over the training of these models seem inevitable. One researcher, David Rozado, has periodically tested ChatGPT’s “bias” by prompting it to answer political-orientation questions. Initially, ChatGPT fell in the left-liberal quadrant but it has since moved toward the neutral centre as the model has been modified. But in an online post, Rozado argued that the widespread social prejudices and blind spots reflected on the internet will be difficult to overcome. “Political biases are not going away in state-of-the-art AI systems,” he concludes.
OpenAI co-founder Elon Musk, who later left the company, has repeatedly criticized “woke AI” and is exploring whether to launch a less restrictive model, according to The Information. “What we need is TruthGPT,” he tweeted. Such rows over partisanship are only a foreshadowing of much bigger battles to come.