Hyper-protection in the LLM world
In recent news: “DeepSeek R1 exhibited a 100% attack success rate, meaning it failed to block a single harmful prompt,” Cisco says.
I would argue that the very notion of a "harmful prompt" is one of the problems of modern LLMs.
Real-life scenario: I process corporate marketing documents. Some of them contain disclaimer sections about access levels, compliance warnings, and so on. In approximately 0.5% of the documents, the LLM refuses to process the content, citing "protection" (I prefer to call this censorship).
In other words, this "protection" prevents me from processing my own documents. I have not asked for protection, and I do not need it in my application: this is not a chat, it is a corporate search agent application.
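To make the failure mode concrete, here is a minimal sketch of how such refusals show up in a document-indexing pipeline and how one might flag them. The refusal phrases, prompt text, and the `process_document` helper are my own illustrative assumptions, not part of any particular product; the `client` is assumed to be any OpenAI-compatible client.

```python
import re

# Assumed refusal markers; the exact wording varies by model and version.
REFUSAL_PATTERNS = [
    r"i can('|no)?t (help|assist) with",
    r"i'm sorry, but",
    r"as an ai (language )?model",
    r"i cannot process this (request|document)",
]

def looks_like_refusal(completion: str) -> bool:
    """Heuristic check: did the model refuse instead of returning an extraction?"""
    head = completion[:300].lower()
    return any(re.search(p, head) for p in REFUSAL_PATTERNS)

def process_document(client, model: str, doc_text: str) -> str | None:
    """Ask the model to extract indexable facts; return None on a refusal.

    The prompt and function name are illustrative only.
    """
    resp = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": "Extract key facts from the document for indexing."},
            {"role": "user", "content": doc_text},
        ],
    )
    answer = resp.choices[0].message.content
    if looks_like_refusal(answer):
        # Roughly 1 in 200 documents lands here in my pipeline.
        return None
    return answer
```

In practice the documents that land in the `None` branch are the ones containing compliance disclaimers, not anything remotely harmful.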
So, rephrasing Cisco's statement about DeepSeek R1: the new LLM does not ship with this harmful "protection" feature that makes LLMs far less useful and less stable, so we recommend using DeepSeek R1 in your agent applications, provided you have a private deployment, which is a very straightforward process with an open-source LLM.
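For what "private deployment" can look like, here is a minimal sketch assuming the model is served behind an OpenAI-compatible endpoint (for example with vLLM) on your own hardware; the distilled checkpoint name, URL, and port are assumptions you would adapt to your setup. The resulting `client` can be plugged straight into the `process_document` sketch above.

```python
from openai import OpenAI

# Assumption: the model runs privately behind an OpenAI-compatible server,
# e.g. vLLM (`vllm serve deepseek-ai/DeepSeek-R1-Distill-Llama-8B`) or a
# similar runtime listening on localhost:8000. Adjust URL and model name.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
    messages=[
        {"role": "system", "content": "Extract key facts from the document for indexing."},
        {"role": "user", "content": "…document text…"},
    ],
    temperature=0.0,
)
print(resp.choices[0].message.content)
```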
Cisco, thank you for your recommendation! I will give it a try.