16
Advanced Generative AI Scenarios
In the previous chapter, we examined in detail how large language models (LLMs) change the attack vectors for poisoning. That analysis was based on the paradigm shift toward externally hosted models accessed via APIs. However, the landscape is changing, and open source or open-access models are becoming increasingly viable options. This chapter explores the supply-chain risks that third-party LLMs bring, especially with regard to model poisoning and tampering. New fine-tuning techniques, including model merges and model adapters, make these advanced scenarios that we need to understand.
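To see why model merging widens the supply-chain attack surface, consider a minimal sketch of weight-space merging (linear interpolation of parameters). The layer names and values below are hypothetical toy data, not any real model's weights; the point is that a tampered parameter in a donor model survives the merge, diluted but intact.

```python
def merge_state_dicts(base, donor, alpha=0.5):
    """Linearly interpolate two models' parameters:
    merged = (1 - alpha) * base + alpha * donor."""
    return {name: (1 - alpha) * base[name] + alpha * donor[name]
            for name in base}

# Toy "state dicts" with one scalar parameter per layer (hypothetical).
base  = {"layer1.weight": 0.20, "layer2.weight": -0.10}
donor = {"layer1.weight": 0.20, "layer2.weight": 9.00}  # poisoned value

merged = merge_state_dicts(base, donor, alpha=0.5)
# The poisoned layer2 weight propagates into the merged model at
# half strength: 0.5 * (-0.10) + 0.5 * 9.00 = 4.45.
print(merged["layer2.weight"])
```

In practice the same arithmetic is applied tensor-by-tensor across millions of parameters, which is why merging an untrusted checkpoint can quietly import its poisoned behavior.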
Similarly, the shift to LLMs has redefined adversarial privacy attacks such as model inversion, influence, and model extraction, making them advanced ...