Chapter 9. Setting Safeguards
GenAI applications always carry some level of risk. That’s because they are built on top of foundational models, a nondeterministic technology that can produce inaccurate or hallucinated answers. Foundational models are also general-purpose, so their responses may not always align with what you want them to do.
In this chapter, we discuss four patterns that can help you set safeguards around your GenAI applications. Template Generation (Pattern 29) is useful in situations where the risk involved in sending content without human review is very high but human review will not scale to the volume of communications. Assembled Reformat (Pattern 30) helps in situations where content needs to be presented in an appealing way but the risk posed by dynamically generated content is too high. Self-Check (Pattern 31) helps you identify potential hallucinations cost-effectively. Finally, Guardrails (Pattern 32) are a catchall way to apply safeguards around your core GenAI applications to ensure that they operate within ethical, legal, and functional parameters.
Pattern 29: Template Generation
The Template Generation pattern reduces the number of items that need human review by pregenerating templates that can be reviewed offline. At inference time, all the application needs to do is deterministic string replacement on the reviewed template. This makes the final responses safe to send to consumers without additional ...
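As a minimal sketch of the inference-time step, the snippet below performs deterministic string replacement on a pre-reviewed template using Python's standard-library `string.Template`. The template text and the field names (`name`, `amount`) are illustrative placeholders, not examples from the pattern itself.

```python
from string import Template

# A template that was pregenerated and approved by a human reviewer
# offline. The $name and $amount fields are illustrative placeholders.
REVIEWED_TEMPLATE = Template(
    "Hi $name, your refund of $amount has been processed. "
    "You should see it on your statement within 5 business days."
)

def render(fields: dict) -> str:
    # Deterministic substitution only -- no model call happens at
    # inference time, so the output is exactly as safe as the
    # reviewed template itself. substitute() raises KeyError if a
    # required field is missing, which is preferable to silently
    # sending an incomplete message.
    return REVIEWED_TEMPLATE.substitute(fields)

message = render({"name": "Ada", "amount": "$42.00"})
print(message)
```

Because the only runtime operation is substitution, any field value that reaches this step can be validated with ordinary input checks rather than content review.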