Based on over 4,400 Java tasks, the report finds that code quality, particularly the number of vulnerabilities generated, improves significantly as developers move up through the four levels of reasoning capability OpenAI now makes available. However, the volume of code generated per task also increases substantially, creating additional maintenance challenges for application developers who will not be familiar with how that code was constructed in the first place.
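For context, those four levels correspond to a reasoning-effort setting exposed through OpenAI's API. The sketch below, written in Java to match the study's tasks, shows how a developer might submit the same task at each level. It assumes the published Chat Completions endpoint and a `reasoning_effort` field with values "minimal" through "high"; treat the exact field names, values, and model identifier as assumptions rather than a definitive integration.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ReasoningEffortDemo {
    public static void main(String[] args) throws Exception {
        // Assumed effort levels for GPT-5-class models; higher effort generally
        // trades latency and cost for more careful, and often longer, output.
        String[] efforts = {"minimal", "low", "medium", "high"};

        HttpClient client = HttpClient.newHttpClient();
        for (String effort : efforts) {
            // Request body for the Chat Completions endpoint; "reasoning_effort"
            // selects how much internal reasoning the model applies (assumption).
            String body = """
                {
                  "model": "gpt-5",
                  "reasoning_effort": "%s",
                  "messages": [
                    {"role": "user",
                     "content": "Write a Java method that validates an email address."}
                  ]
                }""".formatted(effort);

            HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.openai.com/v1/chat/completions"))
                .header("Authorization", "Bearer " + System.getenv("OPENAI_API_KEY"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

            HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
            // Response length is a crude proxy for the report's observation that
            // higher effort tends to produce more code per task.
            System.out.println("effort=" + effort + " -> "
                + response.body().length() + " bytes of response");
        }
    }
}
```

Comparing the responses across effort levels is a simple way to see the quality-versus-volume trade-off the report describes.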
OpenAI's launch of its long-awaited GPT-5 model turned out to be a bit of a dud. Those expecting the revolutionary change CEO Sam Altman had promised outright months earlier were left sorely disappointed. In many ways GPT-5 felt like an iterative improvement, and its colder, less personable tone took aback users who had come to foster an emotional relationship with the bot.
Altman expressed optimism for Generation Z, stating, "I would feel like the luckiest kid in all of history," despite acknowledging potential job displacement from AI.
OpenAI presents GPT-5 as a major step up in AI capability, and it aims to simplify use by automatically routing each task to the most suitable underlying model, so users no longer have to choose a model themselves.
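OpenAI has not published how its router makes that decision, so the following is a purely hypothetical sketch, in Java for consistency with the example above, of what "routing tasks automatically to the most suitable model" can look like: a lightweight heuristic that sends short, simple prompts to a fast tier and longer or reasoning-heavy prompts to a deeper one. The model names, keywords, and thresholds are illustrative assumptions, not OpenAI's actual logic.

```java
import java.util.List;

public class ModelRouter {
    // Hypothetical model tiers; the real GPT-5 router and its criteria are
    // internal to OpenAI and not publicly documented.
    private static final String FAST_MODEL = "gpt-5-mini";          // illustrative
    private static final String REASONING_MODEL = "gpt-5-thinking"; // illustrative

    // Keywords that suggest a prompt needs multi-step reasoning (assumption).
    private static final List<String> REASONING_HINTS =
        List.of("prove", "debug", "step by step", "optimize", "why");

    /** Pick a model tier from crude prompt features: length and reasoning hints. */
    public static String route(String prompt) {
        String lower = prompt.toLowerCase();
        boolean wantsReasoning = REASONING_HINTS.stream().anyMatch(lower::contains);
        boolean isLong = prompt.length() > 500;
        return (wantsReasoning || isLong) ? REASONING_MODEL : FAST_MODEL;
    }

    public static void main(String[] args) {
        System.out.println(route("What's the capital of France?"));            // fast tier
        System.out.println(route("Debug this NullPointerException step by step: ...")); // reasoning tier
    }
}
```

The point of the sketch is only that the user writes one prompt and the system, not the user, decides which model answers it.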
"We use Echo Chamber to seed and reinforce a subtly poisonous conversational context, then guide the model with low-salience storytelling that avoids explicit intent signaling," security researcher Martí Jordà said.
Querying GPT-5 for a recipe could consume several to 20 times more energy than the same request on the mid-2023 version of ChatGPT, indicating the increased resource demands of more advanced AI responses.