The New Yorker:

GPT-5, a new release from OpenAI, is the latest product to suggest that progress on large language models has stalled.

By Cal Newport

Much of the euphoria and dread swirling around today’s artificial-intelligence technologies can be traced back to January, 2020, when a team of researchers at OpenAI published a thirty-page report titled “Scaling Laws for Neural Language Models.” The team was led by the A.I. researcher Jared Kaplan, and included Dario Amodei, who is now the C.E.O. of Anthropic. They investigated a fairly nerdy question: What happens to the performance of language models when you increase their size and the intensity of their training?

Back then, many machine-learning experts thought that, after they had reached a certain size, language models would effectively start memorizing the answers to their training questions, which would make them less useful once deployed. But the OpenAI paper argued that these models would only get better as they grew, and indeed that such improvements might follow a power law—an aggressive curve that resembles a hockey stick. The implication: if you keep building larger language models, and you train them on larger data sets, they'll start to get shockingly good. A few months after the paper, OpenAI seemed to validate the scaling law by releasing GPT-3, which was more than a hundred times larger—and leaps and bounds better—than its predecessor, GPT-2.
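For readers curious what the scaling law actually predicts, here is a minimal sketch in Python of the parameter-count relationship reported in the Kaplan et al. paper: test loss falls off as a power law in model size. The constants (an exponent of roughly 0.076 and a scale constant of roughly 8.8 × 10¹³ non-embedding parameters) are the paper's published fits, used here purely for illustration.

```python
def loss_from_params(n_params, alpha_n=0.076, n_c=8.8e13):
    """Approximate test loss as a power law in non-embedding parameter
    count, per the fits reported in Kaplan et al. (2020)."""
    return (n_c / n_params) ** alpha_n

# A power law means each tenfold increase in parameters shrinks the
# predicted loss by the same constant factor (10 ** -0.076, about 0.84),
# so gains keep arriving steadily as models grow.
for n in [1.5e9, 1.75e11]:  # roughly GPT-2- and GPT-3-scale counts
    print(f"N = {n:.1e}: predicted loss ~ {loss_from_params(n):.2f}")
```

Plotted on log-log axes, this relationship is a straight line: the paper's striking claim was that it held across many orders of magnitude of model size, making the payoff from further scaling predictable.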
