
Claude Opus 4.7: bigger vision input, steadier long-running coding

AI · 1 month ago · source (anthropic.com)

Anthropic released Claude Opus 4.7, and the headline is steadier long-running work rather than a single benchmark win. The post says it handles complex, multi-step coding with more consistency, and that testers were comfortable handing off their hardest work with less supervision. The concrete figures: a 13% gain on a 93-task coding benchmark, and about 3x more resolved production tasks than 4.6.

Vision improved too. Opus 4.7 accepts images up to 2,576 pixels on the long edge, around 3.75 megapixels and more than triple the old limit, and Anthropic reports a large jump on one visual-acuity benchmark, 98.5% against 54.5% for 4.6. It is better at multi-step agent loops, error recovery, and long-context memory across sessions. Pricing is unchanged at $5 per million input tokens and $25 per million output. A new xhigh effort level now sits between high and max.

There is a tradeoff Anthropic states plainly: the updated tokenizer uses 1.0 to 1.35x more tokens per input, though net efficiency improved on the coding tasks they tested. Details are on Anthropic's site.

Why it matters

If you run Claude in production, the tokenizer change is the line item to model first. More tokens per input can quietly offset the per-token price you already budgeted, so the 3x production-task claim is worth testing on your own workload before you assume a net win.
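One way to sanity-check that line item is a back-of-envelope model: hold your workload's token volumes fixed, scale input tokens by the 1.0–1.35x multiplier the article cites, and compare total spend at the stated prices ($5 per million input tokens, $25 per million output). A minimal sketch, with hypothetical workload numbers (the monthly token counts are placeholders, not figures from the article):

```python
def monthly_cost(input_tokens, output_tokens,
                 input_price_per_m=5.0, output_price_per_m=25.0,
                 input_multiplier=1.0):
    """Estimate monthly spend in dollars. `input_multiplier` models the
    tokenizer change (1.0-1.35x more tokens per input, per the article);
    prices are the published $5/M input and $25/M output."""
    inflated_input = input_tokens * input_multiplier
    return (inflated_input / 1e6) * input_price_per_m \
         + (output_tokens / 1e6) * output_price_per_m

# Hypothetical workload: 200M input tokens, 40M output tokens per month.
baseline = monthly_cost(200e6, 40e6)                        # old tokenizer
worst_case = monthly_cost(200e6, 40e6, input_multiplier=1.35)
increase = worst_case / baseline - 1                        # fractional bump
```

On this placeholder workload the worst-case multiplier raises total spend by about 17%, which is the kind of quiet offset worth weighing against the claimed 3x gain in resolved tasks.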

Anthropic · Claude · Models