A recent study from data protection startup Harmonic Security found that nearly one in 10 prompts used by business users when ...
A new report indicates that DeepSeek's R1 reasoning model refused to answer some 85% of 1,360 sensitive-topic prompts.
The Chinese AI startup DeepSeek unintentionally left its entire store of sensitive data exposed online ...
China-based DeepSeek has exploded in popularity, drawing greater scrutiny. Case in point: Security researchers found more ...
While many experts see AI as the future of workplace productivity, new research warns that careless use of ...
A study by cybersecurity startup Harmonic Security found that 8.5% of prompts entered into generative AI models like ChatGPT, Copilot, and Gemini last year included sensitive information, putting ...
Traditional methods of securing data, such as perimeter defenses and endpoint antivirus, aren't sufficient on their own, ...
A cybersecurity firm revealed that over a million lines of sensitive data were exposed by DeepSeek, a Chinese AI startup, ...
"In the case of DeepSeek, one of the most intriguing post-jailbreak discoveries is the ability to extract details about the ...