3 Data privacy and safety with LLMs
This chapter covers
- Improving the safety of outputs from LLMs
- Mitigating privacy risks with user inputs to chatbots
- Understanding data protection laws in the United States and the European Union
In the previous chapter, we discussed how large language models (LLMs) are trained on massive datasets from the internet that are likely to contain personal information, bias, and other types of undesirable content. While some LLM developers use the unrestricted nature of their models as a selling point, most major LLM providers have a set of policies around the kinds of content they don't want the model to produce and are dedicating a great deal of effort to ensuring that their models follow those policies as closely as possible.