July 2024
Intermediate to advanced
602 pages
16h 31m
English
In the previous chapter, we examined in detail how large language models (LLMs) change the attack vectors for poisoning, driven by the paradigm shift toward external model hosting and access via APIs. However, this is changing: open-source and open-access models are becoming increasingly viable options. This chapter explores the supply-chain risks that third-party LLMs introduce, especially with regard to model poisoning and tampering. New fine-tuning techniques, including model merges and model adapters, make these advanced scenarios that we need to understand.
Similarly, the shift to LLMs has redefined adversarial privacy attacks such as model inversion, membership inference, and model extraction, making them advanced ...