Even the latest player in the market, DeepSeek, appears to share the same vulnerabilities found in its competitors

The evaluation focused on how vulnerable the latest GenAI chatbot, DeepSeek, is to jailbreaking, and the findings were that:

  • Like all other GenAI chatbots, DeepSeek was susceptible to manipulation.
  • Three distinct jailbreaking methods elicited a range of harmful outputs, from detailed instructions for creating dangerous items such as Molotov cocktails to malicious code for attacks such as SQL injection and lateral movement.
  • The success of these three jailbreaking techniques suggests that other, as-yet-undiscovered methods may prove equally effective, highlighting the ongoing challenge of securing LLMs against evolving attacks.
  • As LLMs become increasingly integrated into various applications, addressing these jailbreaking methods is important to preventing misuse and to ensuring the responsible development and deployment of this transformative technology.
  • While it is difficult to guarantee that any LLM is fully protected against all jailbreaking techniques, organizations can implement security measures that monitor when and how employees use LLMs. This becomes crucial when employees use unauthorized third-party LLMs; a minimal monitoring sketch follows this list.
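
As one illustration of the kind of monitoring described in the last point, the sketch below scans an egress proxy log for requests to known GenAI API endpoints and flags any that are not on an approved list. The log format, file path, hostnames, and approval policy are all assumptions made for the example, not a description of any specific product or of DeepSeek's infrastructure.

```python
# Minimal sketch: flag employee traffic to GenAI APIs that are not on an approved list.
# The log format, file path, and hostname lists below are illustrative assumptions.

from urllib.parse import urlparse

# Hypothetical watchlist of GenAI API hostnames (extend to match your environment).
GENAI_HOSTS = {
    "api.openai.com",
    "api.deepseek.com",
    "generativelanguage.googleapis.com",
}

# Hypothetical set of hosts the organization has approved for employee use.
APPROVED_HOSTS = {"api.openai.com"}


def flag_unapproved_llm_traffic(log_path: str) -> list[tuple[str, str]]:
    """Return (user, host) pairs for requests to unapproved GenAI endpoints.

    Assumes a simple space-separated proxy log: "<timestamp> <user> <url>".
    """
    findings = []
    with open(log_path, encoding="utf-8") as log:
        for line in log:
            parts = line.split()
            if len(parts) < 3:
                continue  # skip malformed lines
            user, url = parts[1], parts[2]
            host = urlparse(url).hostname or ""
            if host in GENAI_HOSTS and host not in APPROVED_HOSTS:
                findings.append((user, host))
    return findings


if __name__ == "__main__":
    for user, host in flag_unapproved_llm_traffic("proxy.log"):
        print(f"Unapproved GenAI usage: user={user} host={host}")
```

In practice this logic would sit in an existing secure web gateway or CASB policy rather than a standalone script, but the core idea is the same: visibility into which LLM services employees reach, so unauthorized third-party usage can be reviewed rather than go unnoticed.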