News

Anthropic recently upgraded its Claude Sonnet 4 AI model to support up to 1 million tokens of context, thereby ...
Claude Sonnet 4 can now support up to one million tokens of context, marking a fivefold increase from the prior 200,000, ...
Anthropic has expanded the capabilities of its Claude Sonnet 4 AI model to handle up to one million tokens of context, five ...
Anthropic upgrades Claude Sonnet 4 to a 1M token context window and adds memory, enabling full codebase analysis, long ...
The company today revealed that Claude Sonnet 4 now supports up to 1 million tokens of context in the Anthropic API — a five-fold increase over the previous limit.
To account for the extra computing power required for large requests, Anthropic will increase the cost for Claude Sonnet 4 ...
Dan Shipper in Vibe Check: Today, Anthropic is releasing a version of Claude Sonnet 4 that has a 1-million-token context window.
Anthropic has increased the context window of its Claude Sonnet 4 model to 1 million tokens, five times more than ...
Anthropic’s latest move to expand the context window, now in public beta, might encourage Google Gemini users to give it ...
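The larger window is exposed through the Anthropic API as a public beta. Below is a minimal sketch of how a request might opt in, assuming the official `anthropic` Python SDK; the model ID and the beta flag name in the header are assumptions, so check Anthropic's documentation for the exact values.

```python
# Minimal sketch: opting a Claude Sonnet 4 request into the long-context beta.
# Assumptions: the official `anthropic` Python SDK, the model ID
# "claude-sonnet-4-20250514", and the beta flag "context-1m-2025-08-07"
# (the exact flag name may differ; consult Anthropic's docs).
import os
from anthropic import Anthropic

client = Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])

# Hypothetical long input, e.g. a concatenated codebase dump.
with open("large_codebase_dump.txt") as f:
    big_context = f.read()

response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    # Beta features are requested per call via the `anthropic-beta` header.
    extra_headers={"anthropic-beta": "context-1m-2025-08-07"},
    messages=[
        {
            "role": "user",
            "content": f"{big_context}\n\nSummarize the architecture of this codebase.",
        }
    ],
)
print(response.content[0].text)
```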
Claude Opus 4 excels at coding and complex problem-solving, whereas Claude Sonnet 4 improves on Sonnet 3.7 and balances performance and efficiency.
Anthropic says Claude Opus 4 and Sonnet 4 outperform rivals like OpenAI's o3 and Gemini 2.5 Pro on key benchmarks for agentic coding tasks like SWE-bench and Terminal-bench.