News
In August 2025, SonarSource released its latest State of Code study, The Coding Personalities of Leading LLMs – A State of Code Report. This research goes beyond accuracy scores, examining how large ...
Anthropic has upgraded Claude Sonnet 4 with a 1M token context window, competing with OpenAI's GPT-5 and Meta's Llama 4.
The Register on MSN · 6h
AI model 'personalities' shape the quality of generated code
But despite the differences, all models excel at making errors and shouldn't be trusted. Generative AI coding models have common strengths and weaknesses, but express those characteristics differently ...
Anthropic has expanded the capabilities of its Claude Sonnet 4 AI model to handle up to one million tokens of context, five ...
To account for the extra computing power required for large requests, Anthropic will increase the cost for Claude Sonnet 4 ...
Anthropic’s Claude Sonnet 4 now supports a 1 million token context window, enabling AI to process entire codebases and complex documents in a single request—redefining software development and ...
Claude Sonnet 4 has been upgraded, and it can now remember up to 1 million tokens of context, but only when it's used via API ...
The new context window is available today within the Anthropic API for certain customers — like those with Tier 4 and custom ...
GPT-5, a new release from OpenAI, is the latest product to suggest that progress on large language models has stalled.
Anthropic upgrades Claude Sonnet 4 to a 1M token context window and adds memory, enabling full codebase analysis, long ...
The company today revealed that Claude Sonnet 4 now supports up to 1 million tokens of context in the Anthropic API — a five-fold increase over the previous limit.
Anthropic's popular coding model just became a little more enticing for developers with a million-token context window.