Safetensors moves to the PyTorch Foundation, with neutral governance
Hugging Face announced that safetensors has joined the PyTorch Foundation under the Linux Foundation, alongside PyTorch, vLLM, Ray, and DeepSpeed. What the format does is why this matters. Loading model weights used to mean unpickling Python objects, which can execute arbitrary code, so opening an untrusted checkpoint was a real security risk. Safetensors replaces that with a plain layout: a JSON header, capped at 100MB, that describes each tensor, followed by the raw tensor bytes. It also supports zero-copy and lazy loading, so you can read a single weight without deserializing the whole checkpoint.
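A minimal sketch of both properties, using the safetensors Python API (the file layout, per the published format spec, starts with an 8-byte little-endian header length; `model.safetensors` is a placeholder path):

```python
import json
import struct

from safetensors import safe_open

# On-disk layout: an 8-byte little-endian header length, a JSON header
# mapping each tensor name to its dtype, shape, and byte offsets, then
# one contiguous run of raw tensor bytes. Parsing never executes code.
def read_header(path: str) -> dict:
    with open(path, "rb") as f:
        (n,) = struct.unpack("<Q", f.read(8))
        return json.loads(f.read(n))

# Lazy loading: only the requested tensor's byte range is read,
# not the whole checkpoint.
with safe_open("model.safetensors", framework="pt", device="cpu") as f:
    names = f.keys()                      # tensor names from the header
    weight = f.get_tensor(next(iter(names)))
```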
The governance change is the news here, not a new feature. The trademark, repository, and decision-making authority move from Hugging Face alone to a vendor-neutral foundation with a formal path for contributors to become maintainers, while Hugging Face's core maintainers stay on the technical steering committee. Nothing breaks for users: the format, APIs, and Hub integration are unchanged, and existing models keep working. The roadmap includes deeper PyTorch integration, with device-aware loading for CUDA and ROCm without CPU staging, plus APIs for tensor- and pipeline-parallel loading.
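For context, the library already accepts a target device at load time; the roadmap item is about skipping the intermediate host copy. A short sketch of today's API, with a placeholder filename:

```python
from safetensors.torch import load_file

# Load a checkpoint with tensors placed directly on a target device.
# Today this can stage through CPU memory; the roadmap's device-aware
# loading aims to stream straight into CUDA/ROCm buffers.
state_dict = load_file("model.safetensors", device="cuda:0")
```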
Why it matters
If safetensors is in your model supply chain, neutral governance lowers the risk that a format you depend on gets steered by a single company, and the unchanged APIs mean there is nothing to migrate today.