
Mistral Medium 3.5 pairs a self-hostable model with cloud coding agents

AI · 2 weeks ago · source (mistral.ai)

Mistral's announcement is two things in one. Mistral Medium 3.5 is a dense 128-billion-parameter model the company calls its first flagship merged model, folding instruction following, reasoning, and coding into one set of weights, with a 256k-token context window and a footprint small enough to self-host on as few as four GPUs. Mistral reports 77.6 percent on SWE-Bench Verified and 91.4 percent on τ³-Telecom, at API pricing of $1.50 per million input tokens and $7.50 per million output tokens. The vision encoder was trained from scratch to handle variable image sizes and aspect ratios.
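At those rates, the cost of a request is simple arithmetic. A minimal sketch, assuming only the announced prices (the function name and structure are illustrative, not an official SDK):

```python
# Estimate Mistral Medium 3.5 API cost at the announced rates:
# $1.50 per million input tokens, $7.50 per million output tokens.
INPUT_PER_M = 1.50   # USD per 1M input tokens
OUTPUT_PER_M = 7.50  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one request."""
    return (input_tokens * INPUT_PER_M + output_tokens * OUTPUT_PER_M) / 1_000_000

# A long-context request: 200k tokens in, 8k tokens out.
print(f"${estimate_cost(200_000, 8_000):.2f}")  # → $0.36
```

Output-heavy agent loops dominate the bill: at a 5x price ratio, a token generated costs five times a token read.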

The second part is the workflow. Remote agents in Vibe run coding work asynchronously in isolated cloud sandboxes, launched from the CLI or Le Chat, opening pull requests and pinging you when done. The detail that stands out is continuity: a local CLI session can be teleported to the cloud while keeping its history, task state, and approval flow, so moving from your laptop to a long-running remote job is not a restart.
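The continuity claim implies that a session is a serializable bundle of history, task state, and approvals that a cloud runner can rehydrate. A conceptual sketch of that idea, with entirely hypothetical names; this is not Mistral's or Vibe's actual API:

```python
# Conceptual sketch: "teleporting" a CLI session means snapshotting its
# state so a remote sandbox resumes the job instead of restarting it.
# All class and field names here are hypothetical.
import json
from dataclasses import dataclass, field, asdict

@dataclass
class Session:
    history: list = field(default_factory=list)     # prior turns
    task_state: dict = field(default_factory=dict)  # in-progress work
    approvals: list = field(default_factory=list)   # granted permissions

    def snapshot(self) -> str:
        """Serialize the session for a remote runner to pick up."""
        return json.dumps(asdict(self))

    @classmethod
    def restore(cls, blob: str) -> "Session":
        """Rehydrate the session inside the cloud sandbox."""
        return cls(**json.loads(blob))

local = Session(history=["fix failing test"], approvals=["write:repo"])
remote = Session.restore(local.snapshot())
assert remote.history == local.history  # remote job continues, not restarts
```

The interesting part is the approval flow surviving the move: permissions granted on the laptop carry over, so the remote agent does not have to re-prompt.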

Why it matters

If you weigh hosted against open models, a self-hostable flagship at this price and benchmark level changes the build-versus-buy math. The teleport feature, meanwhile, is a concrete answer to the question of where long-running agent jobs should live.

Mistral · Agents