Chapter 8. Alignment Training and Reasoning
Some common reasons for hesitancy in adopting LLMs are hallucinations, limited reasoning skills, and bias and safety issues. In this chapter, we will go through these limitations and introduce techniques to mitigate them. First, we will introduce the concept of alignment training, which helps us steer our models toward desirable outcomes.
Defining Alignment Training
We keep hearing about the alignment problem facing language models. What does this mean in practice? Ideally we would like a language model that we can fully understand, control, and steer. However, current language models are far from this ideal.
Thus, the goal of alignment is to make language models more controllable and steerable. Askell et al. from Anthropic define an aligned AI as one that is “helpful, honest, and harmless.” They describe the three H’s as follows:
- Helpful: As long as a user request isn’t harmful, the AI should attempt to solve it as effectively as possible, asking follow-up questions if needed.
- Honest: The AI should provide accurate information and should be calibrated, providing reasonably accurate uncertainty estimates (see the sketch after this list). It should understand its own shortcomings.
- Harmless: The AI should not be offensive or discriminatory and should refuse to perform tasks that could cause harm to individuals or society.
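To make the “calibrated” part of honesty concrete, the sketch below shows one common way to quantify calibration: expected calibration error (ECE), which compares a model’s stated confidence against its empirical accuracy across confidence bins. The numbers here are toy values chosen for illustration, not outputs of any real model; in practice the confidences would come from the model’s token probabilities or a self-reported uncertainty score.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Compare stated confidence with empirical accuracy across bins.

    A well-calibrated model that says "80% confident" should be right
    about 80% of the time; ECE measures the average size of that gap,
    weighted by how many answers fall into each confidence bin.
    """
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if not mask.any():
            continue
        avg_conf = confidences[mask].mean()  # what the model claimed
        avg_acc = correct[mask].mean()       # how often it was right
        ece += mask.mean() * abs(avg_conf - avg_acc)
    return ece

# Toy example: stated confidences and correctness flags for ten answers.
confs = [0.95, 0.9, 0.8, 0.85, 0.6, 0.7, 0.99, 0.5, 0.75, 0.65]
right = [1,    1,   0,   1,    0,   1,   1,    1,   0,    0]
print(f"ECE: {expected_calibration_error(confs, right):.3f}")
```

A lower ECE means the model’s confidence statements track reality more closely, which is one measurable proxy for the honesty criterion; it says nothing by itself about helpfulness or harmlessness.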
These are lofty principles. Can LLMs meet them? The field of alignment training comprises techniques ...