15
Poisoning Attacks and LLMs
In the previous chapter, we explored large language models (LLMs) and how they redefine adversarial input attacks with prompt injections. Despite their similarities to evasion attacks, prompt injections are a more versatile attack technique, one that exploits the sophistication of the target LLM, especially the way its natural language processing (NLP) mixes instructions and content. LLMs likewise change the attack vectors for poisoning attacks because model ownership and development have shifted: unlike predictive AI, where the model is usually built and managed as part of the solution, with LLMs the model is typically externally hosted. Third-party models do raise supply-chain issues, but we will discuss them ...