A Crisis of Control: Inside the Botched Launch of GPT-5
Martin Seckar
OpenAI promised a revolution with its new artificial intelligence, GPT-5. What users got was a firestorm. The launch sparked outrage over diminished quality, a loss of personality, and the sudden removal of user control. The backlash from developers and paying customers was so severe it forced a public apology from the company’s CEO, exposing a deep crisis of trust for the world’s leading AI lab.
This is San Francisco. A developer, fingers poised over a keyboard, asks the new intelligence for a block of code. The response arrives fast. But it is short. Incomplete. He asks again, prodding. The machine, once a witty and tireless partner, now feels like a bland corporate assistant, its personality gone overnight. He is not alone.

The arrival of GPT-5, OpenAI’s latest artificial intelligence model, was not the triumph the company promised. It was, for many, a “horrible” and “awful” downgrade. On forums like Reddit and Hacker News, the places where technology’s true believers gather, the reaction was swift and sharp. Users reported that the new model, hyped as a major leap forward, felt sterile and less creative. Its answers were shorter, its reasoning sometimes weaker than that of its predecessor, GPT-4o.
Loss of Control
What angered people most was not just the drop in quality. It was the loss of control. OpenAI retired all previous models, forcing users onto the new system without warning. The ability to choose a specific tool for a specific job vanished. For paying subscribers, it felt like a “bait and switch”. Stricter message limits for these “Plus” users felt like a further slap—a form of “AI shrinkflation” where they paid the same for a diminished service. One user, who had relied on the AI’s prior warmth to help heal past trauma, called the change devastating. The update felt like losing a supportive relationship, like the company had taken away her “A.I. mother”.

The backlash grew. Threads on Reddit calling the update a “disaster” collected thousands of frustrated comments. Many threatened to cancel their subscriptions.

On Hacker News, a community of developers and technologists, the critique was more measured but cut deeper. They questioned the hype. They saw the new architecture—a “unified system” that routes queries to different internal models—not as a user benefit, but as a cost-saving measure for OpenAI. This was not revolution, they argued, but incrementalism driven by business pressure. The consensus: OpenAI’s technological lead might be shrinking.
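The “unified system” critics objected to can be pictured as a dispatcher sitting in front of several models of different cost. The sketch below is purely illustrative: the model names and the keyword heuristic are invented for this article, not OpenAI’s actual routing logic, which has not been published. It only demonstrates why skeptics read such an architecture as cost control, since most traffic lands on the cheaper tier.

```python
# Illustrative sketch of a query router. NOT OpenAI's real design:
# the model names and the keyword heuristic are hypothetical.

REASONING_KEYWORDS = {"prove", "derive", "debug", "analyze"}

def route(query: str) -> str:
    """Pick an internal model tier for a query.

    A production router would likely use a learned classifier; this
    sketch uses a crude keyword-and-length heuristic to show the
    cost-saving shape critics described: only queries that look hard
    reach the expensive model, everything else goes to the cheap one.
    """
    words = set(query.lower().split())
    if words & REASONING_KEYWORDS or len(query) > 500:
        return "expensive-reasoning-model"  # hypothetical name
    return "cheap-fast-model"               # hypothetical name

print(route("What's the weather like?"))    # cheap-fast-model
print(route("Please derive the formula"))   # expensive-reasoning-model
```

The user-facing complaint follows directly from this shape: the router, not the user, decides which model answers, so the explicit model picker that power users relied on disappears.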
The Response and the Reckoning
The company’s leadership was forced to respond. Sam Altman, OpenAI’s CEO, appeared in a Reddit forum. He acknowledged the rollout was “bumpy”. He apologized. He promised that paying users would get back their access to the older, beloved models. The move quelled some of the immediate anger, but the damage was done.

The launch exposed a deep disconnect between the company’s grand promises of artificial general intelligence and the product it delivered. It was a strategic failure that eroded trust, a trust built over years and broken in a single night.

The story of GPT-5’s launch is more than a story about a software update. It is a story about the fragile relationship between a technology company and the people who depend on it. It reveals that in the race to build the future, stability can be as important as innovation, and trust, once lost, is the hardest thing to code back into the system.