PANews reported on January 29 that OpenAI said it has found evidence that Chinese artificial intelligence startup DeepSeek used OpenAI's proprietary models to train its own model.

The company said it has seen some evidence of “distillation,” a technique in which developers use the output of larger, more powerful models to improve the performance of smaller ones, allowing them to achieve similar results on a given task at a much lower cost.
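For readers unfamiliar with the term, the sketch below shows the general idea of distillation in PyTorch: a small “student” network is trained to mimic the softened output distribution of a larger “teacher” network rather than ground-truth labels. All models, sizes, and data here are illustrative placeholders, not a description of how OpenAI or DeepSeek actually train their systems.

```python
# Minimal sketch of knowledge distillation (hypothetical models and data).
import torch
import torch.nn as nn
import torch.nn.functional as F

# A large, already-trained "teacher" and a smaller "student" (toy sizes).
teacher = nn.Sequential(nn.Linear(128, 512), nn.ReLU(), nn.Linear(512, 10))
student = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
temperature = 2.0  # softens the teacher's output distribution

for step in range(100):
    x = torch.randn(32, 128)            # stand-in for real training inputs
    with torch.no_grad():
        teacher_logits = teacher(x)      # the teacher's outputs are the training signal

    student_logits = student(x)

    # KL divergence between softened distributions: the student learns to
    # reproduce the teacher's behavior at a fraction of the size and cost.
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature**2

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The same pattern scales up when the “teacher” is a commercial API: the distilling party queries it for outputs and uses those responses as training targets, which is what OpenAI alleges happened here.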

OpenAI declined to comment further on the details of its evidence. Its terms of service state that users cannot “copy” any of its services or “use the outputs to develop models that compete with OpenAI.”

A person close to OpenAI said distillation is common practice in the industry and stressed that the company offers developers a way to do it on its own platform, but added: “The problem is when you do that to create your own models for your own purposes.”

US AI and crypto czar David Sacks also said: “There’s a lot of evidence that what DeepSeek is doing here is extracting knowledge from the OpenAI model, and I don’t think OpenAI is happy about that,” although he did not provide that evidence. (FT)