DeepSeek helps speed up threat detection while raising national security concerns

Jan 29, 2025 | Technology


DeepSeek and its R1 model aren’t wasting any time rewriting the rules of cybersecurity AI in real time, with everyone from startups to enterprise providers piloting integrations with the new model this month.

R1 was developed in China and is based on pure reinforcement learning (RL) without supervised fine-tuning. It is also open source, making it immediately attractive to nearly every cybersecurity startup that is all-in on open-source architecture, development and deployment.

DeepSeek’s $6.5 million investment in the model is delivering performance that matches OpenAI’s o1-1217 in reasoning benchmarks while running on lower-tier Nvidia H800 GPUs. DeepSeek’s pricing sets a new standard with significantly lower costs per million tokens compared to OpenAI’s models. The deepseek-reasoner model charges $2.19 per million output tokens, while OpenAI’s o1 charges $60 for the same. That price difference and the open-source architecture have gotten the attention of CIOs, CISOs, cybersecurity startups and enterprise software providers alike.
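To put that gap in rough, back-of-the-envelope terms, here is a minimal sketch using the per-million-token output prices cited above; the monthly token volume is a hypothetical workload chosen purely for illustration:

```python
# Back-of-the-envelope comparison of output-token costs, using the per-million-token
# prices cited in the article. The monthly token volume is a made-up example workload.
DEEPSEEK_REASONER_PER_M = 2.19   # USD per 1M output tokens (deepseek-reasoner)
OPENAI_O1_PER_M = 60.00          # USD per 1M output tokens (o1)

monthly_output_tokens = 500_000_000  # hypothetical: 500M output tokens per month

deepseek_cost = monthly_output_tokens / 1_000_000 * DEEPSEEK_REASONER_PER_M
openai_cost = monthly_output_tokens / 1_000_000 * OPENAI_O1_PER_M

print(f"deepseek-reasoner: ${deepseek_cost:,.2f}")            # $1,095.00
print(f"OpenAI o1:         ${openai_cost:,.2f}")              # $30,000.00
print(f"Ratio:             {openai_cost / deepseek_cost:.1f}x")  # ~27.4x
```

At list prices, the same output volume costs roughly 27 times more on o1 than on deepseek-reasoner, which is the arithmetic behind the pricing pressure described above.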

(Interestingly, OpenAI claims DeepSeek used its models to train R1 and other models, going so far as to say the company exfiltrated data through multiple queries.)   

An AI breakthrough with hidden risks that will keep emerging

Central to the issue of the model’s security and trustworthiness is whether censorship and covert bias are incorporated into its core, warned Chris Krebs, inaugural director of the U.S. Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) and, most recently, chief public policy officer at SentinelOne.

“Censorship of content critical of the Chinese Communist Party (CCP) may be ‘baked-in’ to the model, and therefore a design feature to contend with that may throw off objective results,” he said. “This ‘political lobotomization’ of Chinese AI models may support…the development and global proliferation of U.S.-based open source AI models.”

He pointed out that, as the argument goes, democratizing access to U.S. products should increase American soft power abroad and undercut the diffusion of Chinese censorship globally. “R1’s low cost and simple compute fundamentals call into question the efficacy of the U.S. strategy to deprive Chinese companies of access to cutting-edge western tech, including …
