
Zhixiong Pan | Oct 10, 2025 06:14
After extended use of o1-pro, o3-pro, and GPT-5-Pro, it's clear that the Pro series models have a thoroughly scientific mindset. Their output is highly structured and can't be used directly to draft articles. Even on tasks like translation they perform poorly: once the source text gets long, the results are overly condensed.
That doesn't mean the Pro models are useless. For purely scientific content, analytical tasks, or anything demanding rigorous logic, they outperform the thinking models. The diminishing returns are obvious, though: computation time increases several-fold, while the improvement in results is only marginal.
On the other hand, models with a more humanities-oriented mindset, such as GPT-4.5 or Sonnet 4.5 (as @howie_serious noted), are built for writing articles. In fact, the GPT series' Deep Research models were previously also used to generate long-form content, and they perform far better at it than the purely scientific (reasoning) models.