Prompt injection and supply chain weaknesses remain the top LLM vulnerabilities, but as the technology evolves, new risks are coming to light, including system prompt leakage and misinformation.
Companies worried about cyberattackers using large language models (LLMs) and other generative artificial intelligence (AI) tools to automatically scan and exploit their systems could gain a new ...